I0604 23:38:47.570114       7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0604 23:38:47.570390       7 e2e.go:129] Starting e2e run "4bea8d16-e345-4d47-bfb5-c0567c47b5c7" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591313926 - Will randomize all specs
Will run 288 of 5095 specs
Jun 4 23:38:47.624: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 23:38:47.628: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 4 23:38:47.656: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 4 23:38:47.696: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 4 23:38:47.696: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 4 23:38:47.696: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 4 23:38:47.707: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 4 23:38:47.707: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 4 23:38:47.707: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
Jun 4 23:38:47.708: INFO: kube-apiserver version: v1.18.2
Jun 4 23:38:47.708: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 23:38:47.714: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 4 23:38:47.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jun 4 23:38:47.761: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Jun 4 23:38:47.789: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 4 23:38:47.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9286" for this suite.
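The `-p 0` flag in the proxy command above delegates port selection to the operating system: binding a listener to port 0 makes the kernel pick a free ephemeral port, which the proxy then reports. A minimal sketch of that mechanism with a plain socket (illustrative only, not part of the test framework):

```python
import socket

# Binding to port 0 asks the kernel for a free ephemeral port,
# the same mechanism `kubectl proxy -p 0` relies on.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]  # the port the kernel actually assigned
print(port)
sock.close()
```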
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 4 23:38:47.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6530.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6530.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6530.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6530.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 4 23:38:54.191: INFO: DNS probes using dns-6530/dns-test-d79e43fc-6deb-4c73-b683-654c0ac24dd9 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 4 23:38:54.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6530" for this suite.
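The `hostname -i | awk -F. ...` pipeline in the probe scripts above turns the pod's IP into its DNS A-record name: dots become dashes, then the namespace and `pod.<cluster-domain>` suffix are appended. The same transformation, sketched in Python (the function name is mine, not part of the test):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    # Kubernetes publishes per-pod A records of the form
    # <ip-with-dashes>.<namespace>.pod.<cluster-domain>
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Example with an illustrative pod IP:
print(pod_a_record("10.244.1.7", "dns-6530"))  # 10-244-1-7.dns-6530.pod.cluster.local
```

The probe then checks that this name resolves over both UDP and TCP with `dig`.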
• [SLOW TEST:6.393 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":2,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 4 23:38:54.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 4 23:39:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2991" for this suite.
• [SLOW TEST:13.479 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":3,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 4 23:39:07.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8142
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-8142
Jun 4 23:39:07.952: INFO: Found 0 stateful pods, waiting for 1
Jun 4 23:39:17.956: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jun 4 23:39:17.990: INFO: Deleting all statefulset in ns statefulset-8142
Jun 4 23:39:18.006: INFO: Scaling statefulset ss to 0
Jun 4 23:39:38.119: INFO: Waiting for statefulset status.replicas updated to 0
Jun 4 23:39:38.122: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 4 23:39:38.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8142" for this suite.
• [SLOW TEST:30.362 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":4,"skipped":97,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 4 23:39:38.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-3061
Jun 4 
23:39:40.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 4 23:39:43.587: INFO: stderr: "I0604 23:39:43.425368 47 log.go:172] (0xc000a182c0) (0xc000616f00) Create stream\nI0604 23:39:43.425457 47 log.go:172] (0xc000a182c0) (0xc000616f00) Stream added, broadcasting: 1\nI0604 23:39:43.440427 47 log.go:172] (0xc000a182c0) Reply frame received for 1\nI0604 23:39:43.440489 47 log.go:172] (0xc000a182c0) (0xc000610c80) Create stream\nI0604 23:39:43.440514 47 log.go:172] (0xc000a182c0) (0xc000610c80) Stream added, broadcasting: 3\nI0604 23:39:43.441858 47 log.go:172] (0xc000a182c0) Reply frame received for 3\nI0604 23:39:43.441890 47 log.go:172] (0xc000a182c0) (0xc000611c20) Create stream\nI0604 23:39:43.441897 47 log.go:172] (0xc000a182c0) (0xc000611c20) Stream added, broadcasting: 5\nI0604 23:39:43.442994 47 log.go:172] (0xc000a182c0) Reply frame received for 5\nI0604 23:39:43.529961 47 log.go:172] (0xc000a182c0) Data frame received for 5\nI0604 23:39:43.529992 47 log.go:172] (0xc000611c20) (5) Data frame handling\nI0604 23:39:43.530015 47 log.go:172] (0xc000611c20) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0604 23:39:43.577311 47 log.go:172] (0xc000a182c0) Data frame received for 3\nI0604 23:39:43.577340 47 log.go:172] (0xc000610c80) (3) Data frame handling\nI0604 23:39:43.577361 47 log.go:172] (0xc000610c80) (3) Data frame sent\nI0604 23:39:43.578344 47 log.go:172] (0xc000a182c0) Data frame received for 3\nI0604 23:39:43.578392 47 log.go:172] (0xc000a182c0) Data frame received for 5\nI0604 23:39:43.578440 47 log.go:172] (0xc000611c20) (5) Data frame handling\nI0604 23:39:43.578476 47 log.go:172] (0xc000610c80) (3) Data frame handling\nI0604 23:39:43.580917 47 log.go:172] (0xc000a182c0) Data frame received for 1\nI0604 
23:39:43.580935 47 log.go:172] (0xc000616f00) (1) Data frame handling\nI0604 23:39:43.580943 47 log.go:172] (0xc000616f00) (1) Data frame sent\nI0604 23:39:43.580950 47 log.go:172] (0xc000a182c0) (0xc000616f00) Stream removed, broadcasting: 1\nI0604 23:39:43.580961 47 log.go:172] (0xc000a182c0) Go away received\nI0604 23:39:43.581452 47 log.go:172] (0xc000a182c0) (0xc000616f00) Stream removed, broadcasting: 1\nI0604 23:39:43.581472 47 log.go:172] (0xc000a182c0) (0xc000610c80) Stream removed, broadcasting: 3\nI0604 23:39:43.581481 47 log.go:172] (0xc000a182c0) (0xc000611c20) Stream removed, broadcasting: 5\n" Jun 4 23:39:43.587: INFO: stdout: "iptables" Jun 4 23:39:43.587: INFO: proxyMode: iptables Jun 4 23:39:43.605: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:43.653: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:45.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:45.658: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:47.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:47.658: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:49.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:49.658: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:51.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:51.657: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:53.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:53.658: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:39:55.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:39:55.657: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-3061 STEP: creating replication controller affinity-nodeport-timeout in namespace services-3061 I0604 23:39:55.765682 7 runners.go:190] Created replication controller with name: 
affinity-nodeport-timeout, namespace: services-3061, replica count: 3 I0604 23:39:58.816128 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 23:40:01.816411 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 4 23:40:01.833: INFO: Creating new exec pod Jun 4 23:40:06.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jun 4 23:40:07.112: INFO: stderr: "I0604 23:40:06.987885 79 log.go:172] (0xc00096d1e0) (0xc000a7c1e0) Create stream\nI0604 23:40:06.987971 79 log.go:172] (0xc00096d1e0) (0xc000a7c1e0) Stream added, broadcasting: 1\nI0604 23:40:06.992821 79 log.go:172] (0xc00096d1e0) Reply frame received for 1\nI0604 23:40:06.994509 79 log.go:172] (0xc00096d1e0) (0xc0006cad20) Create stream\nI0604 23:40:06.994535 79 log.go:172] (0xc00096d1e0) (0xc0006cad20) Stream added, broadcasting: 3\nI0604 23:40:06.995406 79 log.go:172] (0xc00096d1e0) Reply frame received for 3\nI0604 23:40:06.995655 79 log.go:172] (0xc00096d1e0) (0xc000580460) Create stream\nI0604 23:40:06.995670 79 log.go:172] (0xc00096d1e0) (0xc000580460) Stream added, broadcasting: 5\nI0604 23:40:06.997268 79 log.go:172] (0xc00096d1e0) Reply frame received for 5\nI0604 23:40:07.103545 79 log.go:172] (0xc00096d1e0) Data frame received for 5\nI0604 23:40:07.103577 79 log.go:172] (0xc000580460) (5) Data frame handling\nI0604 23:40:07.103600 79 log.go:172] (0xc000580460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0604 23:40:07.106203 79 log.go:172] (0xc00096d1e0) Data frame received for 5\nI0604 23:40:07.106242 79 log.go:172] (0xc000580460) (5) Data frame handling\nI0604 23:40:07.106268 79 
log.go:172] (0xc000580460) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0604 23:40:07.106747 79 log.go:172] (0xc00096d1e0) Data frame received for 5\nI0604 23:40:07.106769 79 log.go:172] (0xc000580460) (5) Data frame handling\nI0604 23:40:07.106794 79 log.go:172] (0xc00096d1e0) Data frame received for 3\nI0604 23:40:07.106814 79 log.go:172] (0xc0006cad20) (3) Data frame handling\nI0604 23:40:07.107818 79 log.go:172] (0xc00096d1e0) Data frame received for 1\nI0604 23:40:07.107829 79 log.go:172] (0xc000a7c1e0) (1) Data frame handling\nI0604 23:40:07.107840 79 log.go:172] (0xc000a7c1e0) (1) Data frame sent\nI0604 23:40:07.107850 79 log.go:172] (0xc00096d1e0) (0xc000a7c1e0) Stream removed, broadcasting: 1\nI0604 23:40:07.107945 79 log.go:172] (0xc00096d1e0) Go away received\nI0604 23:40:07.108118 79 log.go:172] (0xc00096d1e0) (0xc000a7c1e0) Stream removed, broadcasting: 1\nI0604 23:40:07.108134 79 log.go:172] (0xc00096d1e0) (0xc0006cad20) Stream removed, broadcasting: 3\nI0604 23:40:07.108143 79 log.go:172] (0xc00096d1e0) (0xc000580460) Stream removed, broadcasting: 5\n" Jun 4 23:40:07.112: INFO: stdout: "" Jun 4 23:40:07.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c nc -zv -t -w 2 10.110.32.7 80' Jun 4 23:40:07.345: INFO: stderr: "I0604 23:40:07.252857 98 log.go:172] (0xc00003bc30) (0xc000151360) Create stream\nI0604 23:40:07.252919 98 log.go:172] (0xc00003bc30) (0xc000151360) Stream added, broadcasting: 1\nI0604 23:40:07.255873 98 log.go:172] (0xc00003bc30) Reply frame received for 1\nI0604 23:40:07.255916 98 log.go:172] (0xc00003bc30) (0xc000151900) Create stream\nI0604 23:40:07.255929 98 log.go:172] (0xc00003bc30) (0xc000151900) Stream added, broadcasting: 3\nI0604 23:40:07.257094 98 log.go:172] (0xc00003bc30) Reply frame received for 3\nI0604 23:40:07.257340 98 log.go:172] 
(0xc00003bc30) (0xc0004c9e00) Create stream\nI0604 23:40:07.257362 98 log.go:172] (0xc00003bc30) (0xc0004c9e00) Stream added, broadcasting: 5\nI0604 23:40:07.258630 98 log.go:172] (0xc00003bc30) Reply frame received for 5\nI0604 23:40:07.338328 98 log.go:172] (0xc00003bc30) Data frame received for 5\nI0604 23:40:07.338371 98 log.go:172] (0xc00003bc30) Data frame received for 3\nI0604 23:40:07.338414 98 log.go:172] (0xc000151900) (3) Data frame handling\nI0604 23:40:07.338448 98 log.go:172] (0xc0004c9e00) (5) Data frame handling\nI0604 23:40:07.338472 98 log.go:172] (0xc0004c9e00) (5) Data frame sent\nI0604 23:40:07.338490 98 log.go:172] (0xc00003bc30) Data frame received for 5\nI0604 23:40:07.338509 98 log.go:172] (0xc0004c9e00) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.32.7 80\nConnection to 10.110.32.7 80 port [tcp/http] succeeded!\nI0604 23:40:07.339927 98 log.go:172] (0xc00003bc30) Data frame received for 1\nI0604 23:40:07.339963 98 log.go:172] (0xc000151360) (1) Data frame handling\nI0604 23:40:07.339991 98 log.go:172] (0xc000151360) (1) Data frame sent\nI0604 23:40:07.340017 98 log.go:172] (0xc00003bc30) (0xc000151360) Stream removed, broadcasting: 1\nI0604 23:40:07.340040 98 log.go:172] (0xc00003bc30) Go away received\nI0604 23:40:07.340357 98 log.go:172] (0xc00003bc30) (0xc000151360) Stream removed, broadcasting: 1\nI0604 23:40:07.340373 98 log.go:172] (0xc00003bc30) (0xc000151900) Stream removed, broadcasting: 3\nI0604 23:40:07.340383 98 log.go:172] (0xc00003bc30) (0xc0004c9e00) Stream removed, broadcasting: 5\n" Jun 4 23:40:07.345: INFO: stdout: "" Jun 4 23:40:07.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31299' Jun 4 23:40:07.574: INFO: stderr: "I0604 23:40:07.488444 119 log.go:172] (0xc0009ab340) (0xc000ab8820) Create stream\nI0604 23:40:07.488497 119 log.go:172] (0xc0009ab340) 
(0xc000ab8820) Stream added, broadcasting: 1\nI0604 23:40:07.492916 119 log.go:172] (0xc0009ab340) Reply frame received for 1\nI0604 23:40:07.492956 119 log.go:172] (0xc0009ab340) (0xc0006bc140) Create stream\nI0604 23:40:07.492967 119 log.go:172] (0xc0009ab340) (0xc0006bc140) Stream added, broadcasting: 3\nI0604 23:40:07.493809 119 log.go:172] (0xc0009ab340) Reply frame received for 3\nI0604 23:40:07.493840 119 log.go:172] (0xc0009ab340) (0xc0006bd0e0) Create stream\nI0604 23:40:07.493849 119 log.go:172] (0xc0009ab340) (0xc0006bd0e0) Stream added, broadcasting: 5\nI0604 23:40:07.494551 119 log.go:172] (0xc0009ab340) Reply frame received for 5\nI0604 23:40:07.567486 119 log.go:172] (0xc0009ab340) Data frame received for 3\nI0604 23:40:07.567552 119 log.go:172] (0xc0006bc140) (3) Data frame handling\nI0604 23:40:07.567586 119 log.go:172] (0xc0009ab340) Data frame received for 5\nI0604 23:40:07.567625 119 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0604 23:40:07.567703 119 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0604 23:40:07.567732 119 log.go:172] (0xc0009ab340) Data frame received for 5\nI0604 23:40:07.567749 119 log.go:172] (0xc0006bd0e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31299\nConnection to 172.17.0.13 31299 port [tcp/31299] succeeded!\nI0604 23:40:07.568925 119 log.go:172] (0xc0009ab340) Data frame received for 1\nI0604 23:40:07.568955 119 log.go:172] (0xc000ab8820) (1) Data frame handling\nI0604 23:40:07.568970 119 log.go:172] (0xc000ab8820) (1) Data frame sent\nI0604 23:40:07.568995 119 log.go:172] (0xc0009ab340) (0xc000ab8820) Stream removed, broadcasting: 1\nI0604 23:40:07.569015 119 log.go:172] (0xc0009ab340) Go away received\nI0604 23:40:07.569549 119 log.go:172] (0xc0009ab340) (0xc000ab8820) Stream removed, broadcasting: 1\nI0604 23:40:07.569567 119 log.go:172] (0xc0009ab340) (0xc0006bc140) Stream removed, broadcasting: 3\nI0604 23:40:07.569576 119 log.go:172] (0xc0009ab340) (0xc0006bd0e0) Stream removed, 
broadcasting: 5\n" Jun 4 23:40:07.574: INFO: stdout: "" Jun 4 23:40:07.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31299' Jun 4 23:40:07.772: INFO: stderr: "I0604 23:40:07.714047 140 log.go:172] (0xc00003b6b0) (0xc000ad8780) Create stream\nI0604 23:40:07.714109 140 log.go:172] (0xc00003b6b0) (0xc000ad8780) Stream added, broadcasting: 1\nI0604 23:40:07.718631 140 log.go:172] (0xc00003b6b0) Reply frame received for 1\nI0604 23:40:07.718699 140 log.go:172] (0xc00003b6b0) (0xc00060cdc0) Create stream\nI0604 23:40:07.718723 140 log.go:172] (0xc00003b6b0) (0xc00060cdc0) Stream added, broadcasting: 3\nI0604 23:40:07.719556 140 log.go:172] (0xc00003b6b0) Reply frame received for 3\nI0604 23:40:07.719587 140 log.go:172] (0xc00003b6b0) (0xc000366c80) Create stream\nI0604 23:40:07.719602 140 log.go:172] (0xc00003b6b0) (0xc000366c80) Stream added, broadcasting: 5\nI0604 23:40:07.720300 140 log.go:172] (0xc00003b6b0) Reply frame received for 5\nI0604 23:40:07.764544 140 log.go:172] (0xc00003b6b0) Data frame received for 5\nI0604 23:40:07.764574 140 log.go:172] (0xc000366c80) (5) Data frame handling\nI0604 23:40:07.764594 140 log.go:172] (0xc000366c80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31299\nConnection to 172.17.0.12 31299 port [tcp/31299] succeeded!\nI0604 23:40:07.764923 140 log.go:172] (0xc00003b6b0) Data frame received for 3\nI0604 23:40:07.764976 140 log.go:172] (0xc00003b6b0) Data frame received for 5\nI0604 23:40:07.765033 140 log.go:172] (0xc000366c80) (5) Data frame handling\nI0604 23:40:07.765072 140 log.go:172] (0xc00060cdc0) (3) Data frame handling\nI0604 23:40:07.766632 140 log.go:172] (0xc00003b6b0) Data frame received for 1\nI0604 23:40:07.766673 140 log.go:172] (0xc000ad8780) (1) Data frame handling\nI0604 23:40:07.766705 140 log.go:172] (0xc000ad8780) (1) Data frame sent\nI0604 
23:40:07.766808 140 log.go:172] (0xc00003b6b0) (0xc000ad8780) Stream removed, broadcasting: 1\nI0604 23:40:07.766919 140 log.go:172] (0xc00003b6b0) Go away received\nI0604 23:40:07.767165 140 log.go:172] (0xc00003b6b0) (0xc000ad8780) Stream removed, broadcasting: 1\nI0604 23:40:07.767184 140 log.go:172] (0xc00003b6b0) (0xc00060cdc0) Stream removed, broadcasting: 3\nI0604 23:40:07.767192 140 log.go:172] (0xc00003b6b0) (0xc000366c80) Stream removed, broadcasting: 5\n" Jun 4 23:40:07.773: INFO: stdout: "" Jun 4 23:40:07.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31299/ ; done' Jun 4 23:40:08.182: INFO: stderr: "I0604 23:40:07.929569 161 log.go:172] (0xc000add340) (0xc000b241e0) Create stream\nI0604 23:40:07.929645 161 log.go:172] (0xc000add340) (0xc000b241e0) Stream added, broadcasting: 1\nI0604 23:40:07.935391 161 log.go:172] (0xc000add340) Reply frame received for 1\nI0604 23:40:07.935446 161 log.go:172] (0xc000add340) (0xc00085a640) Create stream\nI0604 23:40:07.935460 161 log.go:172] (0xc000add340) (0xc00085a640) Stream added, broadcasting: 3\nI0604 23:40:07.936430 161 log.go:172] (0xc000add340) Reply frame received for 3\nI0604 23:40:07.936456 161 log.go:172] (0xc000add340) (0xc0007085a0) Create stream\nI0604 23:40:07.936463 161 log.go:172] (0xc000add340) (0xc0007085a0) Stream added, broadcasting: 5\nI0604 23:40:07.937807 161 log.go:172] (0xc000add340) Reply frame received for 5\nI0604 23:40:08.003899 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.003937 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.003945 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.003968 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.003990 161 log.go:172] (0xc0007085a0) (5) Data 
frame handling\nI0604 23:40:08.004015 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.087388 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.087444 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.087497 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.088020 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.088047 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.088077 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.088122 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.088139 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.088157 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.094888 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.094912 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.094930 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.095434 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.095457 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.095472 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.095508 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.095531 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.095549 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.102998 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.103027 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.103039 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.103943 161 log.go:172] (0xc000add340) Data frame 
received for 5\nI0604 23:40:08.103976 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.103988 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.104008 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.104019 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.104036 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.110110 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.110124 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.110135 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.110645 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.110684 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.110705 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.110737 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.110750 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.110764 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.114358 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.114377 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.114389 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.114915 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.114928 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.114935 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.114951 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.114972 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.114991 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:31299/\nI0604 23:40:08.119099 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.119114 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.119130 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.119511 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.119540 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.119576 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.119600 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.119622 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.119631 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.123821 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.123842 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.123852 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.124213 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.124251 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.124273 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.124294 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.124316 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.124345 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.128179 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.128209 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.128239 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.128428 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.128446 161 log.go:172] (0xc0007085a0) (5) Data frame handling\n+ echo\n+ curlI0604 23:40:08.128464 161 log.go:172] 
(0xc000add340) Data frame received for 3\nI0604 23:40:08.128502 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.128523 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.128540 161 log.go:172] (0xc0007085a0) (5) Data frame sent\nI0604 23:40:08.128562 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.128576 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.128595 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.132037 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.132069 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.132085 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.132528 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.132546 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.132559 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.132577 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.132588 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.132605 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.140153 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.140173 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.140189 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.141466 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.141508 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.141545 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.141842 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.141862 161 log.go:172] (0xc00085a640) (3) Data frame 
handling\nI0604 23:40:08.141892 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.148180 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.148209 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.148230 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.148808 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.148824 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.148834 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ I0604 23:40:08.148890 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.148900 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.148909 161 log.go:172] (0xc0007085a0) (5) Data frame sent\necho\nI0604 23:40:08.148956 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.148967 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.148976 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.149024 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.149035 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.149044 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ curlI0604 23:40:08.149708 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.149743 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.149761 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.153907 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.153919 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.153933 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.154357 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.154367 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.154373 161 log.go:172] (0xc00085a640) (3) Data 
frame sent\nI0604 23:40:08.154386 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.154403 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.154424 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.159001 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.159020 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.159036 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.159463 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.159481 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.159519 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.159565 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.159576 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.159581 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.163412 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.163427 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.163439 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.163936 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.163966 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.163975 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.163986 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.163991 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.163997 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.167964 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.167980 161 log.go:172] (0xc00085a640) (3) Data frame 
handling\nI0604 23:40:08.168004 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.168341 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.168363 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.168381 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.168389 161 log.go:172] (0xc0007085a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.168401 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.168412 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.172306 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.172333 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.172358 161 log.go:172] (0xc00085a640) (3) Data frame sent\nI0604 23:40:08.172994 161 log.go:172] (0xc000add340) Data frame received for 3\nI0604 23:40:08.173010 161 log.go:172] (0xc00085a640) (3) Data frame handling\nI0604 23:40:08.173072 161 log.go:172] (0xc000add340) Data frame received for 5\nI0604 23:40:08.173085 161 log.go:172] (0xc0007085a0) (5) Data frame handling\nI0604 23:40:08.175011 161 log.go:172] (0xc000add340) Data frame received for 1\nI0604 23:40:08.175034 161 log.go:172] (0xc000b241e0) (1) Data frame handling\nI0604 23:40:08.175066 161 log.go:172] (0xc000b241e0) (1) Data frame sent\nI0604 23:40:08.175090 161 log.go:172] (0xc000add340) (0xc000b241e0) Stream removed, broadcasting: 1\nI0604 23:40:08.175111 161 log.go:172] (0xc000add340) Go away received\nI0604 23:40:08.175593 161 log.go:172] (0xc000add340) (0xc000b241e0) Stream removed, broadcasting: 1\nI0604 23:40:08.175614 161 log.go:172] (0xc000add340) (0xc00085a640) Stream removed, broadcasting: 3\nI0604 23:40:08.175625 161 log.go:172] (0xc000add340) (0xc0007085a0) Stream removed, broadcasting: 5\n" Jun 4 23:40:08.183: INFO: stdout: 
"\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q\naffinity-nodeport-timeout-8v27q" Jun 4 23:40:08.183: INFO: Received response from host: Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: INFO: Received response from host: affinity-nodeport-timeout-8v27q Jun 4 23:40:08.183: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31299/' Jun 4 23:40:08.405: INFO: stderr: "I0604 23:40:08.332402 183 log.go:172] (0xc00003ad10) (0xc0005421e0) Create stream\nI0604 23:40:08.332470 183 log.go:172] (0xc00003ad10) (0xc0005421e0) Stream added, broadcasting: 1\nI0604 23:40:08.335549 183 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0604 23:40:08.335605 183 log.go:172] (0xc00003ad10) (0xc00051e140) Create stream\nI0604 23:40:08.335625 183 log.go:172] (0xc00003ad10) (0xc00051e140) Stream added, broadcasting: 3\nI0604 23:40:08.336553 183 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0604 23:40:08.336587 183 log.go:172] (0xc00003ad10) (0xc00044ad20) Create stream\nI0604 23:40:08.336594 183 log.go:172] (0xc00003ad10) (0xc00044ad20) Stream added, broadcasting: 5\nI0604 23:40:08.338079 183 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0604 23:40:08.394485 183 log.go:172] (0xc00003ad10) Data frame received for 5\nI0604 23:40:08.394521 183 log.go:172] (0xc00044ad20) (5) Data frame handling\nI0604 23:40:08.394546 183 log.go:172] (0xc00044ad20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:08.396759 183 log.go:172] (0xc00003ad10) Data frame received for 3\nI0604 23:40:08.396780 183 log.go:172] (0xc00051e140) (3) Data frame handling\nI0604 23:40:08.396789 183 log.go:172] (0xc00051e140) (3) Data frame sent\nI0604 23:40:08.397404 183 log.go:172] (0xc00003ad10) Data frame received for 5\nI0604 23:40:08.397434 183 log.go:172] (0xc00044ad20) (5) Data frame handling\nI0604 23:40:08.397476 183 log.go:172] (0xc00003ad10) Data frame received for 3\nI0604 23:40:08.397484 183 log.go:172] (0xc00051e140) (3) Data frame handling\nI0604 23:40:08.399381 183 log.go:172] (0xc00003ad10) Data frame received for 1\nI0604 23:40:08.399397 
183 log.go:172] (0xc0005421e0) (1) Data frame handling\nI0604 23:40:08.399403 183 log.go:172] (0xc0005421e0) (1) Data frame sent\nI0604 23:40:08.399410 183 log.go:172] (0xc00003ad10) (0xc0005421e0) Stream removed, broadcasting: 1\nI0604 23:40:08.399431 183 log.go:172] (0xc00003ad10) Go away received\nI0604 23:40:08.399626 183 log.go:172] (0xc00003ad10) (0xc0005421e0) Stream removed, broadcasting: 1\nI0604 23:40:08.399636 183 log.go:172] (0xc00003ad10) (0xc00051e140) Stream removed, broadcasting: 3\nI0604 23:40:08.399642 183 log.go:172] (0xc00003ad10) (0xc00044ad20) Stream removed, broadcasting: 5\n" Jun 4 23:40:08.405: INFO: stdout: "affinity-nodeport-timeout-8v27q" Jun 4 23:40:23.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31299/' Jun 4 23:40:23.629: INFO: stderr: "I0604 23:40:23.539429 204 log.go:172] (0xc00003bc30) (0xc000af2780) Create stream\nI0604 23:40:23.539491 204 log.go:172] (0xc00003bc30) (0xc000af2780) Stream added, broadcasting: 1\nI0604 23:40:23.544316 204 log.go:172] (0xc00003bc30) Reply frame received for 1\nI0604 23:40:23.544350 204 log.go:172] (0xc00003bc30) (0xc0005be280) Create stream\nI0604 23:40:23.544358 204 log.go:172] (0xc00003bc30) (0xc0005be280) Stream added, broadcasting: 3\nI0604 23:40:23.545488 204 log.go:172] (0xc00003bc30) Reply frame received for 3\nI0604 23:40:23.545525 204 log.go:172] (0xc00003bc30) (0xc000542dc0) Create stream\nI0604 23:40:23.545537 204 log.go:172] (0xc00003bc30) (0xc000542dc0) Stream added, broadcasting: 5\nI0604 23:40:23.546537 204 log.go:172] (0xc00003bc30) Reply frame received for 5\nI0604 23:40:23.618580 204 log.go:172] (0xc00003bc30) Data frame received for 5\nI0604 23:40:23.618620 204 log.go:172] (0xc000542dc0) (5) Data frame handling\nI0604 23:40:23.618643 204 log.go:172] (0xc000542dc0) (5) Data frame sent\n+ curl -q -s 
--connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:23.620129 204 log.go:172] (0xc00003bc30) Data frame received for 3\nI0604 23:40:23.620156 204 log.go:172] (0xc0005be280) (3) Data frame handling\nI0604 23:40:23.620175 204 log.go:172] (0xc0005be280) (3) Data frame sent\nI0604 23:40:23.620663 204 log.go:172] (0xc00003bc30) Data frame received for 3\nI0604 23:40:23.620696 204 log.go:172] (0xc0005be280) (3) Data frame handling\nI0604 23:40:23.620721 204 log.go:172] (0xc00003bc30) Data frame received for 5\nI0604 23:40:23.620733 204 log.go:172] (0xc000542dc0) (5) Data frame handling\nI0604 23:40:23.622392 204 log.go:172] (0xc00003bc30) Data frame received for 1\nI0604 23:40:23.622417 204 log.go:172] (0xc000af2780) (1) Data frame handling\nI0604 23:40:23.622436 204 log.go:172] (0xc000af2780) (1) Data frame sent\nI0604 23:40:23.622455 204 log.go:172] (0xc00003bc30) (0xc000af2780) Stream removed, broadcasting: 1\nI0604 23:40:23.622471 204 log.go:172] (0xc00003bc30) Go away received\nI0604 23:40:23.622917 204 log.go:172] (0xc00003bc30) (0xc000af2780) Stream removed, broadcasting: 1\nI0604 23:40:23.622950 204 log.go:172] (0xc00003bc30) (0xc0005be280) Stream removed, broadcasting: 3\nI0604 23:40:23.622970 204 log.go:172] (0xc00003bc30) (0xc000542dc0) Stream removed, broadcasting: 5\n" Jun 4 23:40:23.629: INFO: stdout: "affinity-nodeport-timeout-8v27q" Jun 4 23:40:38.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3061 execpod-affinitycn4j2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31299/' Jun 4 23:40:38.873: INFO: stderr: "I0604 23:40:38.770773 225 log.go:172] (0xc000b97550) (0xc000ac6500) Create stream\nI0604 23:40:38.770861 225 log.go:172] (0xc000b97550) (0xc000ac6500) Stream added, broadcasting: 1\nI0604 23:40:38.775025 225 log.go:172] (0xc000b97550) Reply frame received for 1\nI0604 23:40:38.775080 225 log.go:172] (0xc000b97550) (0xc000576000) Create 
stream\nI0604 23:40:38.775095 225 log.go:172] (0xc000b97550) (0xc000576000) Stream added, broadcasting: 3\nI0604 23:40:38.776223 225 log.go:172] (0xc000b97550) Reply frame received for 3\nI0604 23:40:38.776258 225 log.go:172] (0xc000b97550) (0xc00054ebe0) Create stream\nI0604 23:40:38.776269 225 log.go:172] (0xc000b97550) (0xc00054ebe0) Stream added, broadcasting: 5\nI0604 23:40:38.777050 225 log.go:172] (0xc000b97550) Reply frame received for 5\nI0604 23:40:38.859697 225 log.go:172] (0xc000b97550) Data frame received for 5\nI0604 23:40:38.859729 225 log.go:172] (0xc00054ebe0) (5) Data frame handling\nI0604 23:40:38.859750 225 log.go:172] (0xc00054ebe0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31299/\nI0604 23:40:38.864034 225 log.go:172] (0xc000b97550) Data frame received for 3\nI0604 23:40:38.864061 225 log.go:172] (0xc000576000) (3) Data frame handling\nI0604 23:40:38.864083 225 log.go:172] (0xc000576000) (3) Data frame sent\nI0604 23:40:38.864642 225 log.go:172] (0xc000b97550) Data frame received for 3\nI0604 23:40:38.864658 225 log.go:172] (0xc000576000) (3) Data frame handling\nI0604 23:40:38.864674 225 log.go:172] (0xc000b97550) Data frame received for 5\nI0604 23:40:38.864682 225 log.go:172] (0xc00054ebe0) (5) Data frame handling\nI0604 23:40:38.866660 225 log.go:172] (0xc000b97550) Data frame received for 1\nI0604 23:40:38.866682 225 log.go:172] (0xc000ac6500) (1) Data frame handling\nI0604 23:40:38.866704 225 log.go:172] (0xc000ac6500) (1) Data frame sent\nI0604 23:40:38.866721 225 log.go:172] (0xc000b97550) (0xc000ac6500) Stream removed, broadcasting: 1\nI0604 23:40:38.866738 225 log.go:172] (0xc000b97550) Go away received\nI0604 23:40:38.867134 225 log.go:172] (0xc000b97550) (0xc000ac6500) Stream removed, broadcasting: 1\nI0604 23:40:38.867157 225 log.go:172] (0xc000b97550) (0xc000576000) Stream removed, broadcasting: 3\nI0604 23:40:38.867169 225 log.go:172] (0xc000b97550) (0xc00054ebe0) Stream removed, broadcasting: 5\n" 
Jun 4 23:40:38.874: INFO: stdout: "affinity-nodeport-timeout-72dd4" Jun 4 23:40:38.874: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3061, will wait for the garbage collector to delete the pods Jun 4 23:40:38.996: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.04725ms Jun 4 23:40:39.397: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.507764ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:40:55.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3061" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:77.273 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":5,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:40:55.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:12.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8153" for this suite. • [SLOW TEST:17.163 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":6,"skipped":140,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:12.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 4 23:41:12.731: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3622 /api/v1/namespaces/watch-3622/configmaps/e2e-watch-test-resource-version c145d588-4549-4952-be17-ff31eef53422 10323284 0 2020-06-04 23:41:12 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-04 23:41:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 4 23:41:12.746: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3622 /api/v1/namespaces/watch-3622/configmaps/e2e-watch-test-resource-version c145d588-4549-4952-be17-ff31eef53422 10323286 0 2020-06-04 23:41:12 
+0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-04 23:41:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:12.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3622" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":7,"skipped":142,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:12.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:12.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2495" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":8,"skipped":157,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:12.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:41:13.603: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:41:15.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 23:41:17.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910873, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:41:20.651: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:30.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1670" for this suite. STEP: Destroying namespace "webhook-1670-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.928 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":9,"skipped":162,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:30.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:41:31.042: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pods-8380" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":178,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:35.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:41:35.949: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:41:37.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910895, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910895, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910896, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910895, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:41:41.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8330" for this suite. STEP: Destroying namespace "webhook-8330-markers" for this suite. 
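The run above registers validating and mutating webhooks that target ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then verifies both dummy configurations can still be deleted: the API server is expected to ignore such self-targeting rules so a misbehaving webhook cannot block removal of its own configuration. A minimal sketch of what a deliberately self-targeting configuration body looks like, expressed as a plain dict — the metadata name, service path, and webhook name are assumptions, not taken from the test source:

```python
# Hypothetical ValidatingWebhookConfiguration body as a plain dict.
# The rule deliberately targets webhook configuration objects; per the
# test above, the API server must not let such a webhook prevent
# deletion of webhook configurations.
self_targeting_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-webhook-configuration-deletions"},  # assumed name
    "webhooks": [{
        "name": "deny-webhook-configuration-deletions.example.com",  # assumed
        "rules": [{
            "apiGroups": ["admissionregistration.k8s.io"],
            "apiVersions": ["*"],
            "operations": ["DELETE"],
            "resources": ["validatingwebhookconfigurations",
                          "mutatingwebhookconfigurations"],
        }],
        "clientConfig": {"service": {"name": "e2e-test-webhook",
                                     "namespace": "webhook-8330",
                                     "path": "/always-deny"}},  # path assumed
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
        "failurePolicy": "Fail",
    }],
}

rule = self_targeting_config["webhooks"][0]["rules"][0]
assert "validatingwebhookconfigurations" in rule["resources"]
```

Even with `failurePolicy: Fail`, the test expects the dummy configurations to be deletable, which is the self-protection behavior being verified.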
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":11,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:41.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:41:41.586: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b74a751c-6d9c-428e-b52b-e39606557951" in namespace 
"security-context-test-3571" to be "Succeeded or Failed" Jun 4 23:41:41.593: INFO: Pod "alpine-nnp-false-b74a751c-6d9c-428e-b52b-e39606557951": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036166ms Jun 4 23:41:43.604: INFO: Pod "alpine-nnp-false-b74a751c-6d9c-428e-b52b-e39606557951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018352243s Jun 4 23:41:45.608: INFO: Pod "alpine-nnp-false-b74a751c-6d9c-428e-b52b-e39606557951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021875348s Jun 4 23:41:45.608: INFO: Pod "alpine-nnp-false-b74a751c-6d9c-428e-b52b-e39606557951" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:45.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3571" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:45.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 4 23:41:45.752: INFO: Waiting up to 5m0s for pod "pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4" in namespace "emptydir-5486" to be "Succeeded or Failed" Jun 4 23:41:45.771: INFO: Pod "pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.929127ms Jun 4 23:41:47.778: INFO: Pod "pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026194469s Jun 4 23:41:49.783: INFO: Pod "pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030984676s STEP: Saw pod success Jun 4 23:41:49.783: INFO: Pod "pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4" satisfied condition "Succeeded or Failed" Jun 4 23:41:49.787: INFO: Trying to get logs from node latest-worker pod pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4 container test-container: STEP: delete the pod Jun 4 23:41:49.859: INFO: Waiting for pod pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4 to disappear Jun 4 23:41:49.867: INFO: Pod pod-e71ea9d6-345e-47dc-b0e9-77a769ce0aa4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:49.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5486" for this suite. 
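The (non-root,0666,tmpfs) emptyDir case above creates a pod that writes a file with mode 0666 on a memory-backed emptyDir while running as a non-root user, then checks the result from the container logs. A sketch of the pod shape under stated assumptions — the image, UID, mount path, and command are hypothetical; only the tmpfs-backed emptyDir volume and non-root constraint follow from the test name:

```python
# Sketch of a pod spec for the (non-root,0666,tmpfs) emptyDir case.
# emptyDir with medium "Memory" is backed by tmpfs on the node.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-tmpfs-demo"},  # hypothetical name
    "spec": {
        "securityContext": {"runAsUser": 1001},   # any non-root UID
        "containers": [{
            "name": "test-container",
            "image": "busybox",                   # assumed image
            "command": ["sh", "-c",
                        "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c %a /mnt/test/f"],
            "volumeMounts": [{"name": "scratch", "mountPath": "/mnt/test"}],
        }],
        "volumes": [{"name": "scratch", "emptyDir": {"medium": "Memory"}}],
        "restartPolicy": "Never",
    },
}
assert pod["spec"]["volumes"][0]["emptyDir"]["medium"] == "Memory"
```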
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":264,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:49.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 4 23:41:49.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21" in namespace "downward-api-7247" to be "Succeeded or Failed" Jun 4 23:41:49.939: INFO: Pod "downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21": Phase="Pending", Reason="", readiness=false. Elapsed: 3.154012ms Jun 4 23:41:52.079: INFO: Pod "downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143406198s Jun 4 23:41:54.083: INFO: Pod "downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.147197254s STEP: Saw pod success Jun 4 23:41:54.083: INFO: Pod "downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21" satisfied condition "Succeeded or Failed" Jun 4 23:41:54.086: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21 container client-container: STEP: delete the pod Jun 4 23:41:54.116: INFO: Waiting for pod downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21 to disappear Jun 4 23:41:54.119: INFO: Pod downwardapi-volume-957f8948-bd82-4da3-88ea-c7c9066f2e21 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:41:54.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7247" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":278,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:41:54.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode 
Jun 4 23:41:54.220: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5574" to be "Succeeded or Failed" Jun 4 23:41:54.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639875ms Jun 4 23:41:56.232: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011307712s Jun 4 23:41:58.237: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016253351s Jun 4 23:42:00.241: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021095765s STEP: Saw pod success Jun 4 23:42:00.241: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 4 23:42:00.244: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 4 23:42:00.296: INFO: Waiting for pod pod-host-path-test to disappear Jun 4 23:42:00.316: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:00.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5574" for this suite. 
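The hostPath run above creates `pod-host-path-test` and reads the volume's mode from the logs of `test-container-1` (both names appear in the log). A sketch of the pod shape, assuming a host path of `/tmp` and a busybox-style image, neither of which is stated in the log:

```python
# Sketch of a hostPath pod; the host path, image, and command are assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},  # name from the log above
    "spec": {
        "containers": [{
            "name": "test-container-1",          # name from the log above
            "image": "busybox",                  # assumed image
            "command": ["sh", "-c", "stat -c %a /test-volume"],  # assumed
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "test-volume",
                     "hostPath": {"path": "/tmp", "type": ""}}],  # path assumed
        "restartPolicy": "Never",
    },
}
```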
• [SLOW TEST:6.200 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":294,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:00.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jun 4 23:42:00.391: INFO: Waiting up to 5m0s for pod "var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758" in namespace "var-expansion-2106" to be "Succeeded or Failed" Jun 4 23:42:00.414: INFO: Pod "var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758": Phase="Pending", Reason="", readiness=false. Elapsed: 23.000865ms Jun 4 23:42:02.418: INFO: Pod "var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027198136s Jun 4 23:42:04.422: INFO: Pod "var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03099566s STEP: Saw pod success Jun 4 23:42:04.422: INFO: Pod "var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758" satisfied condition "Succeeded or Failed" Jun 4 23:42:04.427: INFO: Trying to get logs from node latest-worker pod var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758 container dapi-container: STEP: delete the pod Jun 4 23:42:04.535: INFO: Waiting for pod var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758 to disappear Jun 4 23:42:04.568: INFO: Pod var-expansion-bda76459-4ec8-4842-aa06-61c97ea69758 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:04.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2106" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":309,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:04.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod 
STEP: Wait for the deployment to be ready Jun 4 23:42:05.306: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:42:07.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910925, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910925, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910925, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910925, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:42:10.358: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: 
finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:10.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-568" for this suite. STEP: Destroying namespace "webhook-568-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.920 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":17,"skipped":325,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:10.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 4 23:42:10.534: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 4 23:42:10.768: INFO: Waiting for terminating namespaces to be deleted... Jun 4 23:42:10.865: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 4 23:42:10.902: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.902: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 4 23:42:10.902: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.902: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 4 23:42:10.902: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.902: INFO: Container kindnet-cni ready: true, restart count 2 Jun 4 23:42:10.902: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.902: INFO: Container kube-proxy ready: true, restart count 0 Jun 4 23:42:10.902: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 4 23:42:10.909: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.909: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 4 23:42:10.909: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.909: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 4 23:42:10.909: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses 
recorded) Jun 4 23:42:10.909: INFO: Container kindnet-cni ready: true, restart count 2 Jun 4 23:42:10.909: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.909: INFO: Container kube-proxy ready: true, restart count 0 Jun 4 23:42:10.909: INFO: pod-exec-websocket-c74e3c1b-878b-4727-a52c-3a772cfc8515 from pods-8380 started at 2020-06-04 23:41:31 +0000 UTC (1 container statuses recorded) Jun 4 23:42:10.909: INFO: Container main ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5569596f-6279-47d4-8536-77382d34df2b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5569596f-6279-47d4-8536-77382d34df2b off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5569596f-6279-47d4-8536-77382d34df2b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:21.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7971" for this suite. 
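The NodeSelector run above applies a random label to a node (key `kubernetes.io/e2e-5569596f-6279-47d4-8536-77382d34df2b`, value `42`, visible in the log) and relaunches the pod with a matching `nodeSelector`. The scheduler only admits the pod onto nodes whose labels are a superset of that selector. A sketch of the check, with the pod name and image assumed:

```python
# Label key/value mirror the log above; pod name and image are hypothetical.
label_key = "kubernetes.io/e2e-5569596f-6279-47d4-8536-77382d34df2b"
label_value = "42"

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},  # assumed name
    "spec": {
        "containers": [{"name": "with-labels",
                        "image": "k8s.gcr.io/pause"}],  # assumed image
        "nodeSelector": {label_key: label_value},
    },
}

# A node is eligible only if every nodeSelector entry matches its labels:
node_labels = {label_key: label_value,
               "kubernetes.io/hostname": "latest-worker"}
eligible = all(node_labels.get(k) == v
               for k, v in pod["spec"]["nodeSelector"].items())
assert eligible
```

Removing the label at teardown (as the test does) would make `eligible` false again for any future pod using the same selector.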
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.694 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":18,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:21.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 4 23:42:21.289: INFO: Waiting up to 5m0s for pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5" in namespace "downward-api-4451" to be "Succeeded or Failed" Jun 4 23:42:21.292: INFO: Pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.73587ms Jun 4 23:42:23.296: INFO: Pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00709691s Jun 4 23:42:25.300: INFO: Pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011175905s Jun 4 23:42:27.303: INFO: Pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014246441s STEP: Saw pod success Jun 4 23:42:27.303: INFO: Pod "downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5" satisfied condition "Succeeded or Failed" Jun 4 23:42:27.306: INFO: Trying to get logs from node latest-worker2 pod downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5 container dapi-container: STEP: delete the pod Jun 4 23:42:27.355: INFO: Waiting for pod downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5 to disappear Jun 4 23:42:27.396: INFO: Pod downward-api-0cb587f6-9fc3-4eae-b6cb-c65af9535ec5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:27.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4451" for this suite. 
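The Downward API run above checks that when a container declares no resource limits, `limits.cpu` and `limits.memory` exposed through `resourceFieldRef` fall back to the node's allocatable values. A sketch of the env wiring — the container name `dapi-container` appears in the log, but the image and command are assumptions:

```python
# Sketch of the env wiring for the downward API defaults case. With no
# "resources" stanza on the container, limits.cpu / limits.memory resolve
# to the node's allocatable capacity.
container = {
    "name": "dapi-container",      # name from the log above
    "image": "busybox",            # assumed image
    "command": ["sh", "-c", "env"],
    "env": [
        {"name": "CPU_LIMIT",
         "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
        {"name": "MEMORY_LIMIT",
         "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
    ],
    # no "resources" key: limits default to node allocatable
}
assert "resources" not in container
```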
• [SLOW TEST:6.225 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":377,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:27.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:42:28.380: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:42:30.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 23:42:32.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910948, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:42:35.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the 
AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:47.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1670" for this suite. STEP: Destroying namespace "webhook-1670-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.299 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":20,"skipped":377,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:47.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:42:48.679: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:42:50.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910968, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910968, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910968, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726910968, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:42:53.719: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 4 23:42:57.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-1713 to-be-attached-pod -i -c=container1' Jun 4 23:42:57.933: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:42:57.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1713" for this suite. STEP: Destroying namespace "webhook-1713-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.355 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":21,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:42:58.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 4 23:42:58.184: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:43:03.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2558" for this suite. 
• [SLOW TEST:5.562 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":22,"skipped":402,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:43:03.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:43:03.773: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:43:04.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9436" for 
this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":23,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:43:04.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 Jun 4 23:43:04.685: INFO: Waiting up to 1m0s for all nodes to be ready Jun 4 23:44:04.715: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:44:04.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
Jun 4 23:44:08.851: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:44:23.207: INFO: pods created so far: [1 1 1] Jun 4 23:44:23.207: INFO: length of pods created so far: 3 Jun 4 23:44:37.217: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:44:44.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-9384" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:44:44.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8806" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:99.841 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":24,"skipped":449,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:44:44.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:45:08.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4545" for this suite. STEP: Destroying namespace "nsdeletetest-6610" for this suite. Jun 4 23:45:08.527: INFO: Namespace nsdeletetest-6610 was already deleted STEP: Destroying namespace "nsdeletetest-3911" for this suite. • [SLOW TEST:24.164 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":25,"skipped":449,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:45:08.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-7995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7995 to expose endpoints map[] Jun 4 23:45:08.691: INFO: successfully validated that service endpoint-test2 in namespace services-7995 exposes endpoints map[] (23.043695ms elapsed) STEP: Creating pod pod1 in namespace services-7995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7995 to expose endpoints map[pod1:[80]] Jun 4 23:45:12.146: INFO: successfully validated that service endpoint-test2 in namespace services-7995 exposes endpoints map[pod1:[80]] (3.431322264s elapsed) STEP: Creating pod pod2 in namespace services-7995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7995 to expose endpoints map[pod1:[80] pod2:[80]] Jun 4 23:45:15.466: INFO: successfully validated that service endpoint-test2 in namespace services-7995 exposes endpoints map[pod1:[80] pod2:[80]] (3.288214878s elapsed) STEP: Deleting pod pod1 in namespace services-7995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7995 to expose endpoints map[pod2:[80]] Jun 4 23:45:16.539: INFO: successfully validated that service endpoint-test2 in namespace services-7995 exposes endpoints map[pod2:[80]] (1.068523218s elapsed) STEP: Deleting pod pod2 in namespace services-7995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7995 to expose endpoints map[] Jun 4 23:45:17.577: INFO: successfully validated that service endpoint-test2 in namespace services-7995 exposes endpoints map[] (1.030644767s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:45:17.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7995" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.119 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":26,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:45:17.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-lmh9 STEP: Creating a pod to test atomic-volume-subpath Jun 4 23:45:17.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lmh9" in namespace "subpath-9374" to be "Succeeded or Failed" Jun 4 23:45:17.758: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.923752ms Jun 4 23:45:19.763: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02324455s Jun 4 23:45:21.768: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 4.0279218s Jun 4 23:45:23.772: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 6.03261341s Jun 4 23:45:25.777: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 8.037660692s Jun 4 23:45:27.782: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 10.042666503s Jun 4 23:45:29.787: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 12.047007008s Jun 4 23:45:31.791: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 14.051672419s Jun 4 23:45:33.796: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 16.055742667s Jun 4 23:45:35.800: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 18.060101812s Jun 4 23:45:37.805: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 20.065113759s Jun 4 23:45:39.810: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Running", Reason="", readiness=true. Elapsed: 22.069845578s Jun 4 23:45:41.827: INFO: Pod "pod-subpath-test-configmap-lmh9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.087216975s STEP: Saw pod success Jun 4 23:45:41.827: INFO: Pod "pod-subpath-test-configmap-lmh9" satisfied condition "Succeeded or Failed" Jun 4 23:45:41.830: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-lmh9 container test-container-subpath-configmap-lmh9: STEP: delete the pod Jun 4 23:45:41.861: INFO: Waiting for pod pod-subpath-test-configmap-lmh9 to disappear Jun 4 23:45:41.901: INFO: Pod pod-subpath-test-configmap-lmh9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lmh9 Jun 4 23:45:41.901: INFO: Deleting pod "pod-subpath-test-configmap-lmh9" in namespace "subpath-9374" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:45:41.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9374" for this suite. • [SLOW TEST:24.261 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":27,"skipped":513,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:45:41.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:45:53.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-798" for this suite. • [SLOW TEST:11.268 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":28,"skipped":519,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:45:53.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-6d0e11f3-2ee4-4d5a-b396-2724389bc94c STEP: Creating configMap with name cm-test-opt-upd-07e98cb2-c79a-4db6-9acf-59b462013537 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6d0e11f3-2ee4-4d5a-b396-2724389bc94c STEP: Updating configmap cm-test-opt-upd-07e98cb2-c79a-4db6-9acf-59b462013537 STEP: Creating configMap with name cm-test-opt-create-4641c860-e390-4b40-9116-09a7765f2302 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:46:01.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3801" for this suite. 
• [SLOW TEST:8.242 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":537,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:46:01.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the 
/apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:46:01.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4631" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":30,"skipped":542,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:46:01.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-65f9d2d2-c1ba-46b6-b474-a835d466f554 STEP: Creating a pod to test consume configMaps Jun 4 23:46:01.626: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121" in namespace "projected-7681" to be "Succeeded or Failed" Jun 4 23:46:01.646: INFO: Pod "pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.77253ms Jun 4 23:46:03.650: INFO: Pod "pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023627033s Jun 4 23:46:05.654: INFO: Pod "pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027665636s STEP: Saw pod success Jun 4 23:46:05.654: INFO: Pod "pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121" satisfied condition "Succeeded or Failed" Jun 4 23:46:05.656: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121 container projected-configmap-volume-test: STEP: delete the pod Jun 4 23:46:05.734: INFO: Waiting for pod pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121 to disappear Jun 4 23:46:05.738: INFO: Pod pod-projected-configmaps-5c511499-53b5-491a-9ed9-1ef4b8cab121 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:46:05.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7681" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:46:05.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-51090172-fbaf-4144-8e40-135cd2a67daa STEP: Creating a pod to test consume configMaps Jun 4 23:46:05.895: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55" in namespace "projected-3893" to be "Succeeded or Failed" Jun 4 23:46:05.923: INFO: Pod "pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55": Phase="Pending", Reason="", readiness=false. Elapsed: 27.819369ms Jun 4 23:46:07.927: INFO: Pod "pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031825792s Jun 4 23:46:09.932: INFO: Pod "pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036467185s STEP: Saw pod success Jun 4 23:46:09.932: INFO: Pod "pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55" satisfied condition "Succeeded or Failed" Jun 4 23:46:09.936: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55 container projected-configmap-volume-test: STEP: delete the pod Jun 4 23:46:10.179: INFO: Waiting for pod pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55 to disappear Jun 4 23:46:10.189: INFO: Pod pod-projected-configmaps-ef63fc7a-9d58-4c19-8cd8-7984d708ac55 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:46:10.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3893" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":591,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:46:10.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 4 23:46:10.247: INFO: PodSpec: initContainers in spec.initContainers Jun 4 23:47:02.896: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4ab005bf-43f0-4643-94bc-1321b1df0d51", GenerateName:"", Namespace:"init-container-9796", SelfLink:"/api/v1/namespaces/init-container-9796/pods/pod-init-4ab005bf-43f0-4643-94bc-1321b1df0d51", UID:"7a464840-4090-4184-aa68-8b7b91923826", ResourceVersion:"10325429", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726911170, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"247772616"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c1a060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c1a080)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c1a0a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c1a0c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qkmzb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0027ae140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qkmzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qkmzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qkmzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002cf42a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ffc000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cf4330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cf43c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002cf43c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002cf43cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911170, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911170, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911170, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911170, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.70", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.70"}}, StartTime:(*v1.Time)(0xc002c1a0e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ffc460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ffc4d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://40ce92c86f64134c20bf8d7cf2b45ca4d9511118e099087d70b096e9d881c251", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c1a140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c1a100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002cf444f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:02.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9796" for this suite. • [SLOW TEST:52.771 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":33,"skipped":608,"failed":0} [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:02.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create 
ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-9f19a9ab-e149-4dc7-8195-fc22ac01789b [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:03.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4314" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":34,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:03.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:47:03.481: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:47:05.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911223, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911223, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911223, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911223, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:47:08.521: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:09.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9469" for this suite. STEP: Destroying namespace "webhook-9469-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.463 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":35,"skipped":626,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:09.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 4 23:47:09.576: INFO: Waiting up to 5m0s for pod "pod-e753d883-6736-4fcc-8163-c9d761b4b181" in namespace "emptydir-255" to be "Succeeded or Failed" Jun 4 23:47:09.626: INFO: Pod "pod-e753d883-6736-4fcc-8163-c9d761b4b181": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.850477ms Jun 4 23:47:11.631: INFO: Pod "pod-e753d883-6736-4fcc-8163-c9d761b4b181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054344326s Jun 4 23:47:13.635: INFO: Pod "pod-e753d883-6736-4fcc-8163-c9d761b4b181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058345494s STEP: Saw pod success Jun 4 23:47:13.635: INFO: Pod "pod-e753d883-6736-4fcc-8163-c9d761b4b181" satisfied condition "Succeeded or Failed" Jun 4 23:47:13.638: INFO: Trying to get logs from node latest-worker2 pod pod-e753d883-6736-4fcc-8163-c9d761b4b181 container test-container: STEP: delete the pod Jun 4 23:47:13.803: INFO: Waiting for pod pod-e753d883-6736-4fcc-8163-c9d761b4b181 to disappear Jun 4 23:47:13.895: INFO: Pod pod-e753d883-6736-4fcc-8163-c9d761b4b181 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:13.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-255" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":629,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:13.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0604 23:47:25.488159 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 4 23:47:25.488: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:25.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4868" for this suite. 
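The garbage-collector case above gives half the pods a second owner (`simpletest-rc-to-stay`) and then deletes the first owner, expecting those dependents to survive. A toy in-memory model of the rule under test — `ownerRef` and `shouldDelete` are hypothetical names, not the real controller's types:

```go
package main

import "fmt"

// ownerRef models the slice of metadata.ownerReferences the test exercises.
type ownerRef struct {
	name    string
	deleted bool // this owner is gone (or waiting on its dependents)
}

// shouldDelete mirrors the GC rule under test: a dependent is collected only
// when it has no remaining live owner.
func shouldDelete(owners []ownerRef) bool {
	for _, o := range owners {
		if !o.deleted {
			return false // one valid owner keeps the dependent alive
		}
	}
	return len(owners) > 0
}

func main() {
	owners := []ownerRef{
		{name: "simpletest-rc-to-be-deleted", deleted: true},
		{name: "simpletest-rc-to-stay", deleted: false},
	}
	fmt.Println(shouldDelete(owners)) // false: the second owner is still valid
}
```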
• [SLOW TEST:11.881 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":37,"skipped":639,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:25.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 4 23:47:26.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c" in namespace "downward-api-6091" to be "Succeeded or Failed" Jun 4 23:47:26.731: INFO: Pod 
"downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c": Phase="Pending", Reason="", readiness=false. Elapsed: 332.779371ms Jun 4 23:47:28.879: INFO: Pod "downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480057578s Jun 4 23:47:30.889: INFO: Pod "downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.490717539s STEP: Saw pod success Jun 4 23:47:30.889: INFO: Pod "downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c" satisfied condition "Succeeded or Failed" Jun 4 23:47:30.926: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c container client-container: STEP: delete the pod Jun 4 23:47:31.003: INFO: Waiting for pod downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c to disappear Jun 4 23:47:31.257: INFO: Pod downwardapi-volume-11598fc2-f00d-412d-94ca-a7798a74768c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:31.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6091" for this suite. 
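The Downward API case above verifies that a container with no memory limit set sees the node's allocatable memory as its default limit. The fallback rule can be sketched as — `effectiveMemoryLimit` is a hypothetical helper for illustration, not framework code:

```go
package main

import "fmt"

// effectiveMemoryLimit returns the limit (in bytes) a downward API volume
// would expose: the container's own limit when one is set, otherwise the
// node's allocatable memory, which is what this conformance test asserts.
func effectiveMemoryLimit(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit > 0 {
		return containerLimit
	}
	return nodeAllocatable
}

func main() {
	// No limit set on the container: fall back to node allocatable (8 GiB here).
	fmt.Println(effectiveMemoryLimit(0, 8*1024*1024*1024))
}
```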
• [SLOW TEST:5.552 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:31.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 4 23:47:32.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe" in namespace "projected-8951" to be "Succeeded or Failed" Jun 4 23:47:32.127: INFO: Pod 
"downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe": Phase="Pending", Reason="", readiness=false. Elapsed: 90.369211ms Jun 4 23:47:34.268: INFO: Pod "downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231374117s Jun 4 23:47:36.273: INFO: Pod "downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.236331187s STEP: Saw pod success Jun 4 23:47:36.273: INFO: Pod "downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe" satisfied condition "Succeeded or Failed" Jun 4 23:47:36.275: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe container client-container: STEP: delete the pod Jun 4 23:47:36.306: INFO: Waiting for pod downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe to disappear Jun 4 23:47:36.322: INFO: Pod downwardapi-volume-3436ab69-c257-4212-ad9b-5522930f11fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:36.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8951" for this suite. 
• [SLOW TEST:5.222 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":667,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:36.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jun 4 23:47:36.632: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:47:52.159: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6764" for this suite. • [SLOW TEST:15.602 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":40,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:47:52.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-5fd827e5-f0f8-4b6b-bee7-74f3acca61f5 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5fd827e5-f0f8-4b6b-bee7-74f3acca61f5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:00.335: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "configmap-8200" for this suite. • [SLOW TEST:8.175 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":701,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:00.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Jun 4 23:48:00.469: INFO: Waiting up to 5m0s for pod "client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba" in namespace "containers-1025" to be "Succeeded or Failed" Jun 4 23:48:00.537: INFO: Pod "client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba": Phase="Pending", Reason="", readiness=false. Elapsed: 68.117787ms Jun 4 23:48:02.541: INFO: Pod "client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072316764s Jun 4 23:48:04.567: INFO: Pod "client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09825436s STEP: Saw pod success Jun 4 23:48:04.567: INFO: Pod "client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba" satisfied condition "Succeeded or Failed" Jun 4 23:48:04.570: INFO: Trying to get logs from node latest-worker2 pod client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba container test-container: STEP: delete the pod Jun 4 23:48:04.584: INFO: Waiting for pod client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba to disappear Jun 4 23:48:04.605: INFO: Pod client-containers-765aab67-1437-4db0-bc01-1e31d3c380ba no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:04.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1025" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:04.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:48:04.739: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-708a93f6-4240-41fb-a790-175832f28afc" in namespace "security-context-test-2413" to be "Succeeded or Failed" Jun 4 23:48:04.793: INFO: Pod "busybox-readonly-false-708a93f6-4240-41fb-a790-175832f28afc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.042714ms Jun 4 23:48:06.798: INFO: Pod "busybox-readonly-false-708a93f6-4240-41fb-a790-175832f28afc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058608545s Jun 4 23:48:08.803: INFO: Pod "busybox-readonly-false-708a93f6-4240-41fb-a790-175832f28afc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063200948s Jun 4 23:48:08.803: INFO: Pod "busybox-readonly-false-708a93f6-4240-41fb-a790-175832f28afc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:08.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2413" for this suite. 
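The readOnlyRootFilesystem=false case above can be reproduced outside the suite with a minimal pod manifest; this is a sketch, not the test's actual spec, and the pod name, image, and write path are illustrative:

```yaml
# Sketch: container keeps a writable root filesystem.
# With readOnlyRootFilesystem: false the write succeeds and the pod
# reaches phase Succeeded, matching the "Succeeded or Failed" polling
# loop in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /probe && echo writable"]
    securityContext:
      readOnlyRootFilesystem: false
```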
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":722,"failed":0} S ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:08.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:09.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-142" for this suite. 
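The PodTemplates lifecycle test exercises create, get, patch, and delete on the PodTemplate resource; a minimal manifest for such a resource looks roughly like the following (a sketch with illustrative names — the template's pod spec is stored but never instantiated by this test):

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-podtemplate              # illustrative name
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause        # any image works; nothing is scheduled here
```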
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":44,"skipped":723,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:09.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 4 23:48:13.761: INFO: Successfully updated pod "annotationupdate30650e54-19d9-42a0-b3d3-f3c364b3186b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:15.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8393" for this suite. 
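The annotation-update test above relies on a projected downwardAPI volume: annotations patched on the running pod are eventually rewritten into the mounted file. A sketch of such a pod (names and paths illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo         # illustrative name
  annotations:
    build: "one"                      # patch this to trigger the volume update
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

Patching metadata.annotations on the live pod and re-reading the mounted file corresponds to the "Successfully updated pod" step in the log.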
• [SLOW TEST:6.667 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":45,"skipped":727,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:15.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:48:15.863: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d" in namespace "security-context-test-7369" to be "Succeeded or Failed" Jun 4 23:48:15.871: INFO: Pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.653463ms Jun 4 23:48:17.932: INFO: Pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068450791s Jun 4 23:48:19.936: INFO: Pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.072537433s Jun 4 23:48:21.940: INFO: Pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07723864s Jun 4 23:48:21.940: INFO: Pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d" satisfied condition "Succeeded or Failed" Jun 4 23:48:21.947: INFO: Got logs for pod "busybox-privileged-false-041e34fd-30bc-4e77-97e2-6ce8e04c7e0d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:21.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7369" for this suite. 
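The "RTNETLINK answers: Operation not permitted" line captured in the pod logs is exactly what an unprivileged container sees when it tries to manipulate network links. A hedged sketch of a pod that produces it (name and command illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # Without privileged: true (and without NET_ADMIN), link changes are
    # denied by the kernel, yielding "RTNETLINK answers: Operation not
    # permitted" on stderr while the pod still exits successfully.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
```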
• [SLOW TEST:6.161 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":742,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:21.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 4 23:48:28.605: INFO: Successfully updated pod "adopt-release-5bz66" STEP: Checking that the Job readopts the Pod Jun 4 23:48:28.605: INFO: Waiting up to 15m0s for pod "adopt-release-5bz66" in namespace "job-7640" to be "adopted" Jun 4 23:48:28.615: INFO: Pod "adopt-release-5bz66": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.127091ms Jun 4 23:48:30.620: INFO: Pod "adopt-release-5bz66": Phase="Running", Reason="", readiness=true. Elapsed: 2.015214459s Jun 4 23:48:30.620: INFO: Pod "adopt-release-5bz66" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 4 23:48:31.131: INFO: Successfully updated pod "adopt-release-5bz66" STEP: Checking that the Job releases the Pod Jun 4 23:48:31.131: INFO: Waiting up to 15m0s for pod "adopt-release-5bz66" in namespace "job-7640" to be "released" Jun 4 23:48:31.184: INFO: Pod "adopt-release-5bz66": Phase="Running", Reason="", readiness=true. Elapsed: 52.527405ms Jun 4 23:48:33.445: INFO: Pod "adopt-release-5bz66": Phase="Running", Reason="", readiness=true. Elapsed: 2.313664042s Jun 4 23:48:33.445: INFO: Pod "adopt-release-5bz66" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:33.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7640" for this suite. 
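The adopt/release mechanics in that Job test rest on two things: the pod's controller ownerReference (deleting it "orphans" the pod, and the Job controller re-adopts any running pod that still matches its selector) and the pod's labels (removing them makes the selector stop matching, so the controller "releases" the pod). A sketch of such a Job — parallelism, labels, and names are illustrative, not the test's actual values:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release-demo            # illustrative name
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: adopt-release-demo       # removing this label "releases" the pod
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sleep", "3600"]    # keep the pod Running during the test
```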
• [SLOW TEST:11.667 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":47,"skipped":751,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:33.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3364" for this suite. 
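The read-only kubelet test is the inverse of the earlier readOnlyRootFilesystem=false case: with the flag set to true, writes to the root filesystem fail. A minimal sketch (pod name, image, and target path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write is expected to fail with "Read-only file system";
    # "|| true" keeps the container's exit code zero for inspection.
    command: ["sh", "-c", "echo test > /file || true"]
    securityContext:
      readOnlyRootFilesystem: true
```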
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:48:38.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 4 23:48:38.174: INFO: Waiting up to 5m0s for pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141" in namespace "downward-api-5497" to be "Succeeded or Failed" Jun 4 23:48:38.177: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141": Phase="Pending", Reason="", readiness=false. Elapsed: 3.351741ms Jun 4 23:48:40.182: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00809436s Jun 4 23:48:42.186: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012381749s Jun 4 23:48:44.496: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.321827935s Jun 4 23:48:46.500: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.326243125s STEP: Saw pod success Jun 4 23:48:46.500: INFO: Pod "downward-api-4455c4fd-6b09-4111-9404-82dc7e419141" satisfied condition "Succeeded or Failed" Jun 4 23:48:46.504: INFO: Trying to get logs from node latest-worker pod downward-api-4455c4fd-6b09-4111-9404-82dc7e419141 container dapi-container: STEP: delete the pod Jun 4 23:48:46.605: INFO: Waiting for pod downward-api-4455c4fd-6b09-4111-9404-82dc7e419141 to disappear Jun 4 23:48:46.611: INFO: Pod downward-api-4455c4fd-6b09-4111-9404-82dc7e419141 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:48:46.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5497" for this suite. • [SLOW TEST:8.510 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":831,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Jun 4 23:48:46.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:48:46.707: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 4 23:48:49.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 create -f -' Jun 4 23:48:54.397: INFO: stderr: "" Jun 4 23:48:54.397: INFO: stdout: "e2e-test-crd-publish-openapi-5980-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 4 23:48:54.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 delete e2e-test-crd-publish-openapi-5980-crds test-foo' Jun 4 23:48:54.512: INFO: stderr: "" Jun 4 23:48:54.512: INFO: stdout: "e2e-test-crd-publish-openapi-5980-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 4 23:48:54.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 apply -f -' Jun 4 23:48:57.197: INFO: stderr: "" Jun 4 23:48:57.197: INFO: stdout: "e2e-test-crd-publish-openapi-5980-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 4 23:48:57.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 delete e2e-test-crd-publish-openapi-5980-crds test-foo' Jun 4 23:48:57.296: INFO: stderr: "" Jun 4 23:48:57.296: INFO: stdout: "e2e-test-crd-publish-openapi-5980-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" 
deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 4 23:48:57.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 create -f -' Jun 4 23:49:00.496: INFO: rc: 1 Jun 4 23:49:00.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 apply -f -' Jun 4 23:49:01.743: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 4 23:49:01.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 create -f -' Jun 4 23:49:01.964: INFO: rc: 1 Jun 4 23:49:01.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-523 apply -f -' Jun 4 23:49:02.229: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 4 23:49:02.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5980-crds' Jun 4 23:49:02.461: INFO: stderr: "" Jun 4 23:49:02.461: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5980-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. 
Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 4 23:49:02.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5980-crds.metadata' Jun 4 23:49:02.690: INFO: stderr: "" Jun 4 23:49:02.690: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5980-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. 
Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 4 23:49:02.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5980-crds.spec' Jun 4 23:49:02.958: INFO: stderr: "" Jun 4 23:49:02.958: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5980-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 4 23:49:02.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5980-crds.spec.bars' Jun 4 23:49:03.217: INFO: stderr: "" Jun 4 23:49:03.217: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5980-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n 
List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 4 23:49:03.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5980-crds.spec.bars2' Jun 4 23:49:03.461: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:06.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-523" for this suite. • [SLOW TEST:19.783 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":50,"skipped":832,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:06.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:12.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9156" for this suite. STEP: Destroying namespace "nsdeletetest-1305" for this suite. Jun 4 23:49:12.791: INFO: Namespace nsdeletetest-1305 was already deleted STEP: Destroying namespace "nsdeletetest-9615" for this suite. • [SLOW TEST:6.392 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":51,"skipped":832,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:12.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper 
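The namespace-deletion sequence in the STEP lines above (create a test namespace, create a service in it, delete the namespace, recreate it, then verify no service survives) relies on namespace deletion garbage-collecting every namespaced object. A minimal sketch of that invariant against an in-memory stand-in — `FakeCluster` and its method names are hypothetical illustrations, not the e2e framework's API:

```python
# Sketch of the cascade the Namespaces e2e test verifies: deleting a
# namespace removes every service scoped to it, so a recreated namespace
# of the same name starts empty. FakeCluster is a hypothetical stand-in,
# not a real Kubernetes client.

class FakeCluster:
    def __init__(self):
        self.namespaces = set()
        self.services = {}  # (namespace, name) -> spec

    def create_namespace(self, ns):
        self.namespaces.add(ns)

    def create_service(self, ns, name):
        if ns not in self.namespaces:
            raise ValueError(f"namespace {ns!r} not found")
        self.services[(ns, name)] = {}

    def delete_namespace(self, ns):
        # Namespace deletion garbage-collects all namespaced objects.
        self.namespaces.discard(ns)
        self.services = {k: v for k, v in self.services.items() if k[0] != ns}

    def list_services(self, ns):
        return [name for (n, name) in self.services if n == ns]


cluster = FakeCluster()
cluster.create_namespace("nsdeletetest")
cluster.create_service("nsdeletetest", "test-service")
cluster.delete_namespace("nsdeletetest")
cluster.create_namespace("nsdeletetest")  # recreate with the same name
assert cluster.list_services("nsdeletetest") == []
```

The final assertion mirrors the "Verifying there is no service in the namespace" step: a recreated namespace shares only its name with the deleted one, none of its contents.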
STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:16.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5470" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":52,"skipped":841,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:16.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:33.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6285" for this suite. • [SLOW TEST:16.284 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":53,"skipped":881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:33.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:49:33.368: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:37.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7054" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:37.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:49:37.844: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:49:39.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911377, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911377, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911377, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911377, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:49:42.892: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:49:42.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6476-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:49:44.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1717" for this suite. STEP: Destroying namespace "webhook-1717-markers" for this suite. 
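The mutating-webhook flow exercised above hinges on the webhook returning an AdmissionReview response that echoes the request UID and carries a base64-encoded JSONPatch, per the `admission.k8s.io/v1` contract. A minimal sketch of such a response builder — the `mutate` function and the label it adds are illustrative, not the mutation the e2e webhook actually applies:

```python
import base64
import json

def mutate(review: dict) -> dict:
    """Build a mutating AdmissionReview response that adds one label.

    The patched label is illustrative only; the e2e webhook applies its own
    mutation. What the API requires: the response echoes request.uid, sets
    patchType "JSONPatch", and base64-encodes the JSON patch document.
    """
    patch = [{"op": "add", "path": "/metadata/labels",
              "value": {"mutated": "true"}}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }


req = {"apiVersion": "admission.k8s.io/v1", "kind": "AdmissionReview",
       "request": {"uid": "1234", "object": {"metadata": {"name": "foo"}}}}
resp = mutate(req)
assert resp["response"]["uid"] == "1234"
```

In the test above the API server calls the deployed webhook with such a request for each create/patch of the custom resource, and applies the decoded patch before persisting the object in whichever version is currently marked as storage.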
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.803 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":55,"skipped":941,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:49:44.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9168 Jun 4 23:49:50.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s 
--connect-timeout 1 http://localhost:10249/proxyMode' Jun 4 23:49:50.756: INFO: stderr: "I0604 23:49:50.527102 560 log.go:172] (0xc000c380b0) (0xc0004bed20) Create stream\nI0604 23:49:50.527162 560 log.go:172] (0xc000c380b0) (0xc0004bed20) Stream added, broadcasting: 1\nI0604 23:49:50.529572 560 log.go:172] (0xc000c380b0) Reply frame received for 1\nI0604 23:49:50.529609 560 log.go:172] (0xc000c380b0) (0xc0000ddae0) Create stream\nI0604 23:49:50.529622 560 log.go:172] (0xc000c380b0) (0xc0000ddae0) Stream added, broadcasting: 3\nI0604 23:49:50.530556 560 log.go:172] (0xc000c380b0) Reply frame received for 3\nI0604 23:49:50.530583 560 log.go:172] (0xc000c380b0) (0xc0001390e0) Create stream\nI0604 23:49:50.530592 560 log.go:172] (0xc000c380b0) (0xc0001390e0) Stream added, broadcasting: 5\nI0604 23:49:50.531435 560 log.go:172] (0xc000c380b0) Reply frame received for 5\nI0604 23:49:50.657749 560 log.go:172] (0xc000c380b0) Data frame received for 5\nI0604 23:49:50.657799 560 log.go:172] (0xc0001390e0) (5) Data frame handling\nI0604 23:49:50.657820 560 log.go:172] (0xc0001390e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0604 23:49:50.746390 560 log.go:172] (0xc000c380b0) Data frame received for 3\nI0604 23:49:50.746440 560 log.go:172] (0xc0000ddae0) (3) Data frame handling\nI0604 23:49:50.746483 560 log.go:172] (0xc0000ddae0) (3) Data frame sent\nI0604 23:49:50.746849 560 log.go:172] (0xc000c380b0) Data frame received for 5\nI0604 23:49:50.746890 560 log.go:172] (0xc0001390e0) (5) Data frame handling\nI0604 23:49:50.746925 560 log.go:172] (0xc000c380b0) Data frame received for 3\nI0604 23:49:50.746944 560 log.go:172] (0xc0000ddae0) (3) Data frame handling\nI0604 23:49:50.749592 560 log.go:172] (0xc000c380b0) Data frame received for 1\nI0604 23:49:50.749617 560 log.go:172] (0xc0004bed20) (1) Data frame handling\nI0604 23:49:50.749631 560 log.go:172] (0xc0004bed20) (1) Data frame sent\nI0604 23:49:50.749650 560 log.go:172] 
(0xc000c380b0) (0xc0004bed20) Stream removed, broadcasting: 1\nI0604 23:49:50.749674 560 log.go:172] (0xc000c380b0) Go away received\nI0604 23:49:50.750009 560 log.go:172] (0xc000c380b0) (0xc0004bed20) Stream removed, broadcasting: 1\nI0604 23:49:50.750028 560 log.go:172] (0xc000c380b0) (0xc0000ddae0) Stream removed, broadcasting: 3\nI0604 23:49:50.750040 560 log.go:172] (0xc000c380b0) (0xc0001390e0) Stream removed, broadcasting: 5\n" Jun 4 23:49:50.756: INFO: stdout: "iptables" Jun 4 23:49:50.756: INFO: proxyMode: iptables Jun 4 23:49:50.762: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:49:50.782: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:49:52.782: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:49:52.787: INFO: Pod kube-proxy-mode-detector still exists Jun 4 23:49:54.782: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 4 23:49:54.787: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9168 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9168 I0604 23:49:54.832146 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9168, replica count: 3 I0604 23:49:57.882566 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 23:50:00.882829 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 4 23:50:00.889: INFO: Creating new exec pod Jun 4 23:50:05.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 execpod-affinity8c2xb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jun 4 23:50:06.191: INFO: stderr: 
"I0604 23:50:06.060579 579 log.go:172] (0xc000b993f0) (0xc000b80500) Create stream\nI0604 23:50:06.060632 579 log.go:172] (0xc000b993f0) (0xc000b80500) Stream added, broadcasting: 1\nI0604 23:50:06.065711 579 log.go:172] (0xc000b993f0) Reply frame received for 1\nI0604 23:50:06.065757 579 log.go:172] (0xc000b993f0) (0xc000882f00) Create stream\nI0604 23:50:06.065771 579 log.go:172] (0xc000b993f0) (0xc000882f00) Stream added, broadcasting: 3\nI0604 23:50:06.066879 579 log.go:172] (0xc000b993f0) Reply frame received for 3\nI0604 23:50:06.066931 579 log.go:172] (0xc000b993f0) (0xc00087a640) Create stream\nI0604 23:50:06.066957 579 log.go:172] (0xc000b993f0) (0xc00087a640) Stream added, broadcasting: 5\nI0604 23:50:06.067855 579 log.go:172] (0xc000b993f0) Reply frame received for 5\nI0604 23:50:06.160334 579 log.go:172] (0xc000b993f0) Data frame received for 5\nI0604 23:50:06.160362 579 log.go:172] (0xc00087a640) (5) Data frame handling\nI0604 23:50:06.160375 579 log.go:172] (0xc00087a640) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0604 23:50:06.181847 579 log.go:172] (0xc000b993f0) Data frame received for 5\nI0604 23:50:06.181872 579 log.go:172] (0xc00087a640) (5) Data frame handling\nI0604 23:50:06.181888 579 log.go:172] (0xc00087a640) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0604 23:50:06.182191 579 log.go:172] (0xc000b993f0) Data frame received for 5\nI0604 23:50:06.182234 579 log.go:172] (0xc00087a640) (5) Data frame handling\nI0604 23:50:06.182267 579 log.go:172] (0xc000b993f0) Data frame received for 3\nI0604 23:50:06.182287 579 log.go:172] (0xc000882f00) (3) Data frame handling\nI0604 23:50:06.183896 579 log.go:172] (0xc000b993f0) Data frame received for 1\nI0604 23:50:06.183915 579 log.go:172] (0xc000b80500) (1) Data frame handling\nI0604 23:50:06.183934 579 log.go:172] (0xc000b80500) (1) Data frame sent\nI0604 23:50:06.183949 579 log.go:172] (0xc000b993f0) (0xc000b80500) Stream 
removed, broadcasting: 1\nI0604 23:50:06.183965 579 log.go:172] (0xc000b993f0) Go away received\nI0604 23:50:06.184271 579 log.go:172] (0xc000b993f0) (0xc000b80500) Stream removed, broadcasting: 1\nI0604 23:50:06.184285 579 log.go:172] (0xc000b993f0) (0xc000882f00) Stream removed, broadcasting: 3\nI0604 23:50:06.184293 579 log.go:172] (0xc000b993f0) (0xc00087a640) Stream removed, broadcasting: 5\n" Jun 4 23:50:06.192: INFO: stdout: "" Jun 4 23:50:06.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 execpod-affinity8c2xb -- /bin/sh -x -c nc -zv -t -w 2 10.110.162.16 80' Jun 4 23:50:06.410: INFO: stderr: "I0604 23:50:06.335555 599 log.go:172] (0xc000aff4a0) (0xc000b06640) Create stream\nI0604 23:50:06.335627 599 log.go:172] (0xc000aff4a0) (0xc000b06640) Stream added, broadcasting: 1\nI0604 23:50:06.339254 599 log.go:172] (0xc000aff4a0) Reply frame received for 1\nI0604 23:50:06.339304 599 log.go:172] (0xc000aff4a0) (0xc0000dd0e0) Create stream\nI0604 23:50:06.339321 599 log.go:172] (0xc000aff4a0) (0xc0000dd0e0) Stream added, broadcasting: 3\nI0604 23:50:06.340044 599 log.go:172] (0xc000aff4a0) Reply frame received for 3\nI0604 23:50:06.340070 599 log.go:172] (0xc000aff4a0) (0xc000b066e0) Create stream\nI0604 23:50:06.340079 599 log.go:172] (0xc000aff4a0) (0xc000b066e0) Stream added, broadcasting: 5\nI0604 23:50:06.340702 599 log.go:172] (0xc000aff4a0) Reply frame received for 5\nI0604 23:50:06.400592 599 log.go:172] (0xc000aff4a0) Data frame received for 5\nI0604 23:50:06.400626 599 log.go:172] (0xc000b066e0) (5) Data frame handling\nI0604 23:50:06.400650 599 log.go:172] (0xc000b066e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.110.162.16 80\nConnection to 10.110.162.16 80 port [tcp/http] succeeded!\nI0604 23:50:06.400750 599 log.go:172] (0xc000aff4a0) Data frame received for 3\nI0604 23:50:06.400775 599 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0604 23:50:06.401043 
599 log.go:172] (0xc000aff4a0) Data frame received for 5\nI0604 23:50:06.401060 599 log.go:172] (0xc000b066e0) (5) Data frame handling\nI0604 23:50:06.403155 599 log.go:172] (0xc000aff4a0) Data frame received for 1\nI0604 23:50:06.403179 599 log.go:172] (0xc000b06640) (1) Data frame handling\nI0604 23:50:06.403194 599 log.go:172] (0xc000b06640) (1) Data frame sent\nI0604 23:50:06.403224 599 log.go:172] (0xc000aff4a0) (0xc000b06640) Stream removed, broadcasting: 1\nI0604 23:50:06.403242 599 log.go:172] (0xc000aff4a0) Go away received\nI0604 23:50:06.403734 599 log.go:172] (0xc000aff4a0) (0xc000b06640) Stream removed, broadcasting: 1\nI0604 23:50:06.403756 599 log.go:172] (0xc000aff4a0) (0xc0000dd0e0) Stream removed, broadcasting: 3\nI0604 23:50:06.403767 599 log.go:172] (0xc000aff4a0) (0xc000b066e0) Stream removed, broadcasting: 5\n" Jun 4 23:50:06.410: INFO: stdout: "" Jun 4 23:50:06.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 execpod-affinity8c2xb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.162.16:80/ ; done' Jun 4 23:50:06.840: INFO: stderr: "I0604 23:50:06.549095 619 log.go:172] (0xc000a9e370) (0xc00055e320) Create stream\nI0604 23:50:06.549384 619 log.go:172] (0xc000a9e370) (0xc00055e320) Stream added, broadcasting: 1\nI0604 23:50:06.552877 619 log.go:172] (0xc000a9e370) Reply frame received for 1\nI0604 23:50:06.552910 619 log.go:172] (0xc000a9e370) (0xc0002f6780) Create stream\nI0604 23:50:06.552919 619 log.go:172] (0xc000a9e370) (0xc0002f6780) Stream added, broadcasting: 3\nI0604 23:50:06.554139 619 log.go:172] (0xc000a9e370) Reply frame received for 3\nI0604 23:50:06.554197 619 log.go:172] (0xc000a9e370) (0xc0002f7400) Create stream\nI0604 23:50:06.554212 619 log.go:172] (0xc000a9e370) (0xc0002f7400) Stream added, broadcasting: 5\nI0604 23:50:06.555156 619 log.go:172] (0xc000a9e370) Reply frame received for 
5\nI0604 23:50:06.628028 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.628058 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.628078 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ seq 0 15\nI0604 23:50:06.636545 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.636584 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.636598 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.636618 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.636628 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.636639 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.738538 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.738572 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.738594 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.739316 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.739371 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.739411 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.739441 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.739469 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.739491 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.747671 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.747695 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.747715 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.748498 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.748532 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.748562 619 log.go:172] (0xc000a9e370) Data frame 
received for 5\nI0604 23:50:06.748593 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.748614 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.748652 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.756205 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.756232 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.756258 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.756802 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.756820 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.756827 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.756836 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.756840 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.756845 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.763628 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.763656 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.763780 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.764420 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.764443 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.764455 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.764547 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.764561 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.764576 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.768248 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.768292 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 
23:50:06.768315 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.768625 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.768644 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.768669 619 log.go:172] (0xc0002f7400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.768696 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.768723 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.768748 619 log.go:172] (0xc0002f7400) (5) Data frame sent\nI0604 23:50:06.773690 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.773718 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.773727 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.774046 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.774076 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.774094 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.774156 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.774186 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.774210 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.778524 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.778543 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.778564 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.778951 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.778967 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.778976 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.778996 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.779005 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 
23:50:06.779012 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.785441 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.785465 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.785488 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.786223 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.786249 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.786273 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.786295 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.786310 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.786327 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.790899 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.790927 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.790962 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.791845 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.791866 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.791876 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.791891 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.791905 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.791914 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.795664 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.795703 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.795733 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.796284 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.796304 619 
log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.796312 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.796325 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.796338 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.796346 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.800531 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.800550 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.800568 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.801383 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.801404 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.801421 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.801445 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.801468 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.801491 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.807724 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.807751 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.807784 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.808407 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.808495 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.808522 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.808569 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.808637 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.808694 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.813675 619 log.go:172] 
(0xc000a9e370) Data frame received for 3\nI0604 23:50:06.813707 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.813750 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.814147 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.814190 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.814208 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.814220 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.814281 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.814308 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.821009 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.821020 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.821026 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.821889 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.821925 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.821959 619 log.go:172] (0xc0002f7400) (5) Data frame sent\nI0604 23:50:06.821981 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.821997 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.822014 619 log.go:172] (0xc0002f6780) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.824934 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.824970 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.825006 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.825185 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.825201 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.825212 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n+ echo\n+ curlI0604 23:50:06.825444 619 
log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.825458 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.825468 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.825487 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.825519 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.825539 619 log.go:172] (0xc0002f7400) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:06.828871 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.828905 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.828943 619 log.go:172] (0xc0002f6780) (3) Data frame sent\nI0604 23:50:06.829627 619 log.go:172] (0xc000a9e370) Data frame received for 3\nI0604 23:50:06.829676 619 log.go:172] (0xc0002f6780) (3) Data frame handling\nI0604 23:50:06.829936 619 log.go:172] (0xc000a9e370) Data frame received for 5\nI0604 23:50:06.829957 619 log.go:172] (0xc0002f7400) (5) Data frame handling\nI0604 23:50:06.831623 619 log.go:172] (0xc000a9e370) Data frame received for 1\nI0604 23:50:06.831752 619 log.go:172] (0xc00055e320) (1) Data frame handling\nI0604 23:50:06.831806 619 log.go:172] (0xc00055e320) (1) Data frame sent\nI0604 23:50:06.831860 619 log.go:172] (0xc000a9e370) (0xc00055e320) Stream removed, broadcasting: 1\nI0604 23:50:06.831910 619 log.go:172] (0xc000a9e370) Go away received\nI0604 23:50:06.832202 619 log.go:172] (0xc000a9e370) (0xc00055e320) Stream removed, broadcasting: 1\nI0604 23:50:06.832224 619 log.go:172] (0xc000a9e370) (0xc0002f6780) Stream removed, broadcasting: 3\nI0604 23:50:06.832231 619 log.go:172] (0xc000a9e370) (0xc0002f7400) Stream removed, broadcasting: 5\n" Jun 4 23:50:06.841: INFO: stdout: 
"\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2\naffinity-clusterip-timeout-7mqp2" Jun 4 23:50:06.841: INFO: Received response from host: Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Received response from host: 
affinity-clusterip-timeout-7mqp2 Jun 4 23:50:06.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 execpod-affinity8c2xb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.110.162.16:80/' Jun 4 23:50:07.043: INFO: stderr: "I0604 23:50:06.974144 639 log.go:172] (0xc000b01970) (0xc000b98640) Create stream\nI0604 23:50:06.974208 639 log.go:172] (0xc000b01970) (0xc000b98640) Stream added, broadcasting: 1\nI0604 23:50:06.977922 639 log.go:172] (0xc000b01970) Reply frame received for 1\nI0604 23:50:06.977960 639 log.go:172] (0xc000b01970) (0xc0006e8e60) Create stream\nI0604 23:50:06.977972 639 log.go:172] (0xc000b01970) (0xc0006e8e60) Stream added, broadcasting: 3\nI0604 23:50:06.978576 639 log.go:172] (0xc000b01970) Reply frame received for 3\nI0604 23:50:06.978603 639 log.go:172] (0xc000b01970) (0xc000538f00) Create stream\nI0604 23:50:06.978609 639 log.go:172] (0xc000b01970) (0xc000538f00) Stream added, broadcasting: 5\nI0604 23:50:06.979197 639 log.go:172] (0xc000b01970) Reply frame received for 5\nI0604 23:50:07.028803 639 log.go:172] (0xc000b01970) Data frame received for 5\nI0604 23:50:07.028837 639 log.go:172] (0xc000538f00) (5) Data frame handling\nI0604 23:50:07.028857 639 log.go:172] (0xc000538f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:07.034106 639 log.go:172] (0xc000b01970) Data frame received for 3\nI0604 23:50:07.034149 639 log.go:172] (0xc0006e8e60) (3) Data frame handling\nI0604 23:50:07.034187 639 log.go:172] (0xc0006e8e60) (3) Data frame sent\nI0604 23:50:07.035190 639 log.go:172] (0xc000b01970) Data frame received for 3\nI0604 23:50:07.035226 639 log.go:172] (0xc0006e8e60) (3) Data frame handling\nI0604 23:50:07.035334 639 log.go:172] (0xc000b01970) Data frame received for 5\nI0604 23:50:07.035362 639 log.go:172] (0xc000538f00) (5) Data frame handling\nI0604 23:50:07.036960 639 log.go:172] 
(0xc000b01970) Data frame received for 1\nI0604 23:50:07.036984 639 log.go:172] (0xc000b98640) (1) Data frame handling\nI0604 23:50:07.037000 639 log.go:172] (0xc000b98640) (1) Data frame sent\nI0604 23:50:07.037021 639 log.go:172] (0xc000b01970) (0xc000b98640) Stream removed, broadcasting: 1\nI0604 23:50:07.037041 639 log.go:172] (0xc000b01970) Go away received\nI0604 23:50:07.037598 639 log.go:172] (0xc000b01970) (0xc000b98640) Stream removed, broadcasting: 1\nI0604 23:50:07.037623 639 log.go:172] (0xc000b01970) (0xc0006e8e60) Stream removed, broadcasting: 3\nI0604 23:50:07.037633 639 log.go:172] (0xc000b01970) (0xc000538f00) Stream removed, broadcasting: 5\n" Jun 4 23:50:07.043: INFO: stdout: "affinity-clusterip-timeout-7mqp2" Jun 4 23:50:22.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9168 execpod-affinity8c2xb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.110.162.16:80/' Jun 4 23:50:22.271: INFO: stderr: "I0604 23:50:22.179610 659 log.go:172] (0xc00003a2c0) (0xc00044ec80) Create stream\nI0604 23:50:22.179698 659 log.go:172] (0xc00003a2c0) (0xc00044ec80) Stream added, broadcasting: 1\nI0604 23:50:22.182250 659 log.go:172] (0xc00003a2c0) Reply frame received for 1\nI0604 23:50:22.182310 659 log.go:172] (0xc00003a2c0) (0xc00051b0e0) Create stream\nI0604 23:50:22.182338 659 log.go:172] (0xc00003a2c0) (0xc00051b0e0) Stream added, broadcasting: 3\nI0604 23:50:22.183394 659 log.go:172] (0xc00003a2c0) Reply frame received for 3\nI0604 23:50:22.183433 659 log.go:172] (0xc00003a2c0) (0xc0006c4460) Create stream\nI0604 23:50:22.183461 659 log.go:172] (0xc00003a2c0) (0xc0006c4460) Stream added, broadcasting: 5\nI0604 23:50:22.184550 659 log.go:172] (0xc00003a2c0) Reply frame received for 5\nI0604 23:50:22.258020 659 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0604 23:50:22.258042 659 log.go:172] (0xc0006c4460) (5) Data frame handling\nI0604 23:50:22.258055 
659 log.go:172] (0xc0006c4460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.110.162.16:80/\nI0604 23:50:22.263556 659 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0604 23:50:22.263590 659 log.go:172] (0xc00051b0e0) (3) Data frame handling\nI0604 23:50:22.263613 659 log.go:172] (0xc00051b0e0) (3) Data frame sent\nI0604 23:50:22.264163 659 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0604 23:50:22.264179 659 log.go:172] (0xc00051b0e0) (3) Data frame handling\nI0604 23:50:22.264271 659 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0604 23:50:22.264306 659 log.go:172] (0xc0006c4460) (5) Data frame handling\nI0604 23:50:22.266151 659 log.go:172] (0xc00003a2c0) Data frame received for 1\nI0604 23:50:22.266185 659 log.go:172] (0xc00044ec80) (1) Data frame handling\nI0604 23:50:22.266203 659 log.go:172] (0xc00044ec80) (1) Data frame sent\nI0604 23:50:22.266339 659 log.go:172] (0xc00003a2c0) (0xc00044ec80) Stream removed, broadcasting: 1\nI0604 23:50:22.266438 659 log.go:172] (0xc00003a2c0) Go away received\nI0604 23:50:22.266850 659 log.go:172] (0xc00003a2c0) (0xc00044ec80) Stream removed, broadcasting: 1\nI0604 23:50:22.266886 659 log.go:172] (0xc00003a2c0) (0xc00051b0e0) Stream removed, broadcasting: 3\nI0604 23:50:22.266903 659 log.go:172] (0xc00003a2c0) (0xc0006c4460) Stream removed, broadcasting: 5\n" Jun 4 23:50:22.271: INFO: stdout: "affinity-clusterip-timeout-wgq8c" Jun 4 23:50:22.271: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9168, will wait for the garbage collector to delete the pods Jun 4 23:50:22.414: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 23.820558ms Jun 4 23:50:23.415: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 1.000247314s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 
23:50:34.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9168" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:50.791 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":56,"skipped":952,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:50:34.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 4 23:50:35.087: INFO: Waiting up to 5m0s for pod "downward-api-95586b31-0501-4415-9194-40fccfc06e3b" in namespace "downward-api-9903" to be "Succeeded or Failed" Jun 4 23:50:35.094: INFO: Pod "downward-api-95586b31-0501-4415-9194-40fccfc06e3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.310667ms Jun 4 23:50:37.102: INFO: Pod "downward-api-95586b31-0501-4415-9194-40fccfc06e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014885146s Jun 4 23:50:39.106: INFO: Pod "downward-api-95586b31-0501-4415-9194-40fccfc06e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019285795s STEP: Saw pod success Jun 4 23:50:39.106: INFO: Pod "downward-api-95586b31-0501-4415-9194-40fccfc06e3b" satisfied condition "Succeeded or Failed" Jun 4 23:50:39.109: INFO: Trying to get logs from node latest-worker pod downward-api-95586b31-0501-4415-9194-40fccfc06e3b container dapi-container: STEP: delete the pod Jun 4 23:50:39.164: INFO: Waiting for pod downward-api-95586b31-0501-4415-9194-40fccfc06e3b to disappear Jun 4 23:50:39.170: INFO: Pod downward-api-95586b31-0501-4415-9194-40fccfc06e3b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:50:39.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9903" for this suite. 
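The Downward API test above verifies that a pod can observe its node's IP through an environment variable. A minimal sketch of a pod spec that exercises this (the pod name, image, and variable name here are illustrative; this is not the test's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name, not from the test
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the Downward API field for the node's IP
```

The kubelet resolves `status.hostIP` when the container starts, so the variable reflects the node the pod was actually scheduled to.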
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":973,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:50:39.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 4 23:50:39.267: INFO: Created pod &Pod{ObjectMeta:{dns-5825 dns-5825 /api/v1/namespaces/dns-5825/pods/dns-5825 d21a1791-9622-4726-b108-b43db3f5e35c 10327020 0 2020-06-04 23:50:39 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-06-04 23:50:39 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrdgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrdgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrdgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&P
odSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 4 23:50:39.306: INFO: The status of Pod dns-5825 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:50:41.311: INFO: The status of Pod dns-5825 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:50:43.311: INFO: The status of Pod dns-5825 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
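The pod dump above shows `DNSPolicy:None` with `Nameservers:[1.1.1.1]` and `Searches:[resolv.conf.local]`. Expressed as a manifest fragment, the fields under test would look like this (a sketch using only the values visible in the log):

```yaml
spec:
  dnsPolicy: "None"          # ignore cluster DNS entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1                # custom resolver from the test
    searches:
    - resolv.conf.local      # custom search suffix from the test
```

With `dnsPolicy: None`, the pod's `/etc/resolv.conf` is generated solely from `dnsConfig`, which is exactly what the two `agnhost dns-suffix` / `dns-server-list` verification steps that follow check.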
Jun 4 23:50:43.311: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5825 PodName:dns-5825 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 23:50:43.311: INFO: >>> kubeConfig: /root/.kube/config I0604 23:50:43.359398 7 log.go:172] (0xc0029e64d0) (0xc001b620a0) Create stream I0604 23:50:43.359447 7 log.go:172] (0xc0029e64d0) (0xc001b620a0) Stream added, broadcasting: 1 I0604 23:50:43.361786 7 log.go:172] (0xc0029e64d0) Reply frame received for 1 I0604 23:50:43.361840 7 log.go:172] (0xc0029e64d0) (0xc0018240a0) Create stream I0604 23:50:43.361856 7 log.go:172] (0xc0029e64d0) (0xc0018240a0) Stream added, broadcasting: 3 I0604 23:50:43.363329 7 log.go:172] (0xc0029e64d0) Reply frame received for 3 I0604 23:50:43.363378 7 log.go:172] (0xc0029e64d0) (0xc0018243c0) Create stream I0604 23:50:43.363396 7 log.go:172] (0xc0029e64d0) (0xc0018243c0) Stream added, broadcasting: 5 I0604 23:50:43.364816 7 log.go:172] (0xc0029e64d0) Reply frame received for 5 I0604 23:50:43.461811 7 log.go:172] (0xc0029e64d0) Data frame received for 3 I0604 23:50:43.461842 7 log.go:172] (0xc0018240a0) (3) Data frame handling I0604 23:50:43.461860 7 log.go:172] (0xc0018240a0) (3) Data frame sent I0604 23:50:43.462896 7 log.go:172] (0xc0029e64d0) Data frame received for 5 I0604 23:50:43.462915 7 log.go:172] (0xc0018243c0) (5) Data frame handling I0604 23:50:43.462985 7 log.go:172] (0xc0029e64d0) Data frame received for 3 I0604 23:50:43.462999 7 log.go:172] (0xc0018240a0) (3) Data frame handling I0604 23:50:43.464398 7 log.go:172] (0xc0029e64d0) Data frame received for 1 I0604 23:50:43.464418 7 log.go:172] (0xc001b620a0) (1) Data frame handling I0604 23:50:43.464437 7 log.go:172] (0xc001b620a0) (1) Data frame sent I0604 23:50:43.464450 7 log.go:172] (0xc0029e64d0) (0xc001b620a0) Stream removed, broadcasting: 1 I0604 23:50:43.464463 7 log.go:172] (0xc0029e64d0) Go away received I0604 23:50:43.464904 7 log.go:172] (0xc0029e64d0) 
(0xc001b620a0) Stream removed, broadcasting: 1 I0604 23:50:43.464927 7 log.go:172] (0xc0029e64d0) (0xc0018240a0) Stream removed, broadcasting: 3 I0604 23:50:43.464937 7 log.go:172] (0xc0029e64d0) (0xc0018243c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Jun 4 23:50:43.464: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5825 PodName:dns-5825 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 23:50:43.465: INFO: >>> kubeConfig: /root/.kube/config I0604 23:50:43.492300 7 log.go:172] (0xc002950f20) (0xc001efa320) Create stream I0604 23:50:43.492337 7 log.go:172] (0xc002950f20) (0xc001efa320) Stream added, broadcasting: 1 I0604 23:50:43.494604 7 log.go:172] (0xc002950f20) Reply frame received for 1 I0604 23:50:43.494651 7 log.go:172] (0xc002950f20) (0xc001b62140) Create stream I0604 23:50:43.494665 7 log.go:172] (0xc002950f20) (0xc001b62140) Stream added, broadcasting: 3 I0604 23:50:43.495598 7 log.go:172] (0xc002950f20) Reply frame received for 3 I0604 23:50:43.495662 7 log.go:172] (0xc002950f20) (0xc001b62280) Create stream I0604 23:50:43.495690 7 log.go:172] (0xc002950f20) (0xc001b62280) Stream added, broadcasting: 5 I0604 23:50:43.496431 7 log.go:172] (0xc002950f20) Reply frame received for 5 I0604 23:50:43.575909 7 log.go:172] (0xc002950f20) Data frame received for 3 I0604 23:50:43.575955 7 log.go:172] (0xc001b62140) (3) Data frame handling I0604 23:50:43.575988 7 log.go:172] (0xc001b62140) (3) Data frame sent I0604 23:50:43.576820 7 log.go:172] (0xc002950f20) Data frame received for 3 I0604 23:50:43.576859 7 log.go:172] (0xc001b62140) (3) Data frame handling I0604 23:50:43.577095 7 log.go:172] (0xc002950f20) Data frame received for 5 I0604 23:50:43.577327 7 log.go:172] (0xc001b62280) (5) Data frame handling I0604 23:50:43.578854 7 log.go:172] (0xc002950f20) Data frame received for 1 I0604 23:50:43.578887 7 log.go:172] (0xc001efa320) (1) Data 
frame handling I0604 23:50:43.578907 7 log.go:172] (0xc001efa320) (1) Data frame sent I0604 23:50:43.578929 7 log.go:172] (0xc002950f20) (0xc001efa320) Stream removed, broadcasting: 1 I0604 23:50:43.578949 7 log.go:172] (0xc002950f20) Go away received I0604 23:50:43.579188 7 log.go:172] (0xc002950f20) (0xc001efa320) Stream removed, broadcasting: 1 I0604 23:50:43.579215 7 log.go:172] (0xc002950f20) (0xc001b62140) Stream removed, broadcasting: 3 I0604 23:50:43.579245 7 log.go:172] (0xc002950f20) (0xc001b62280) Stream removed, broadcasting: 5 Jun 4 23:50:43.579: INFO: Deleting pod dns-5825... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:50:43.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5825" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":58,"skipped":984,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:50:43.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 4 23:50:43.735: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6125' Jun 4 23:50:44.144: INFO: stderr: "" Jun 4 23:50:44.144: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 4 23:50:45.148: INFO: Selector matched 1 pods for map[app:agnhost] Jun 4 23:50:45.148: INFO: Found 0 / 1 Jun 4 23:50:46.173: INFO: Selector matched 1 pods for map[app:agnhost] Jun 4 23:50:46.174: INFO: Found 0 / 1 Jun 4 23:50:47.156: INFO: Selector matched 1 pods for map[app:agnhost] Jun 4 23:50:47.156: INFO: Found 1 / 1 Jun 4 23:50:47.156: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 4 23:50:47.158: INFO: Selector matched 1 pods for map[app:agnhost] Jun 4 23:50:47.158: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 4 23:50:47.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-hblwd --namespace=kubectl-6125 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 4 23:50:47.276: INFO: stderr: "" Jun 4 23:50:47.276: INFO: stdout: "pod/agnhost-master-hblwd patched\n" STEP: checking annotations Jun 4 23:50:47.280: INFO: Selector matched 1 pods for map[app:agnhost] Jun 4 23:50:47.280: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:50:47.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6125" for this suite. 
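The patch applied in the step above is a strategic merge patch. For reference, this is the metadata it leaves on the pod (pod name and annotation values taken from the log; a sketch of the result, not output captured from the cluster):

```yaml
# After: kubectl patch pod agnhost-master-hblwd -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  name: agnhost-master-hblwd
  annotations:
    x: "y"   # annotation added by the patch; existing metadata is merged, not replaced
```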
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":59,"skipped":995,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:50:47.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:50:47.374: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 4 23:50:47.406: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 4 23:50:52.417: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 4 23:50:52.417: INFO: Creating deployment "test-rolling-update-deployment" Jun 4 23:50:52.424: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 4 23:50:52.455: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 4 23:50:54.549: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 4 23:50:54.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911452, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911452, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911452, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911452, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 23:50:56.556: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 4 23:50:56.567: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1647 /apis/apps/v1/namespaces/deployment-1647/deployments/test-rolling-update-deployment 932a5bf6-ca74-47cf-815e-f1f902aa3935 10327208 1 2020-06-04 23:50:52 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-06-04 23:50:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-04 23:50:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00442af48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-04 23:50:52 +0000 UTC,LastTransitionTime:2020-06-04 23:50:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-06-04 23:50:56 +0000 UTC,LastTransitionTime:2020-06-04 23:50:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 4 23:50:56.570: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-1647 /apis/apps/v1/namespaces/deployment-1647/replicasets/test-rolling-update-deployment-df7bb669b 0550dbc2-7cc2-49fa-95f4-6453e6ae4299 10327197 1 2020-06-04 23:50:52 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 932a5bf6-ca74-47cf-815e-f1f902aa3935 0xc00442b7c0 0xc00442b7c1}] [] [{kube-controller-manager Update apps/v1 2020-06-04 23:50:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932a5bf6-ca74-47cf-815e-f1f902aa3935\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00442b8b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 4 23:50:56.571: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 4 23:50:56.571: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1647 /apis/apps/v1/namespaces/deployment-1647/replicasets/test-rolling-update-controller 469ffd63-127e-4ea1-8684-489748b053e9 10327206 2 2020-06-04 23:50:47 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 932a5bf6-ca74-47cf-815e-f1f902aa3935 0xc00442b5ff 0xc00442b610}] [] [{e2e.test Update apps/v1 2020-06-04 23:50:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-04 23:50:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"932a5bf6-ca74-47cf-815e-f1f902aa3935\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00442b708 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 4 23:50:56.575: INFO: Pod "test-rolling-update-deployment-df7bb669b-fxbd8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-fxbd8 test-rolling-update-deployment-df7bb669b- deployment-1647 /api/v1/namespaces/deployment-1647/pods/test-rolling-update-deployment-df7bb669b-fxbd8 ceeabf68-44f5-4203-b853-36c0acc80612 10327196 0 2020-06-04 23:50:52 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 0550dbc2-7cc2-49fa-95f4-6453e6ae4299 0xc0052c8bf0 0xc0052c8bf1}] [] [{kube-controller-manager Update v1 2020-06-04 23:50:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0550dbc2-7cc2-49fa-95f4-6453e6ae4299\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-04 23:50:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvn92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvn92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resou
rces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvn92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodC
ondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-04 23:50:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-04 23:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-04 23:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-04 23:50:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.11,StartTime:2020-06-04 23:50:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-04 23:50:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://dab301ecb0c035f27b0d7e273211f68be9ab1e278cf70196d9320d9622c261db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:50:56.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1647" for this suite. 
• [SLOW TEST:9.295 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":60,"skipped":1010,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:50:56.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-47a3f65a-c325-45d6-b49c-a753c21e859a STEP: Creating a pod to test consume configMaps Jun 4 23:50:56.942: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7" in namespace "configmap-5965" to be "Succeeded or Failed" Jun 4 23:50:56.951: INFO: Pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.278333ms Jun 4 23:50:58.955: INFO: Pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013199441s Jun 4 23:51:01.156: INFO: Pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213734693s Jun 4 23:51:03.160: INFO: Pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.217754587s STEP: Saw pod success Jun 4 23:51:03.160: INFO: Pod "pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7" satisfied condition "Succeeded or Failed" Jun 4 23:51:03.163: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7 container configmap-volume-test: STEP: delete the pod Jun 4 23:51:03.199: INFO: Waiting for pod pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7 to disappear Jun 4 23:51:03.231: INFO: Pod pod-configmaps-0c11e06d-7e39-4845-b6af-f02dd75964d7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:51:03.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5965" for this suite. 
• [SLOW TEST:6.655 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1014,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:51:03.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 4 23:51:03.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3236' Jun 4 23:51:03.778: INFO: stderr: "" Jun 4 23:51:03.778: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 4 23:51:03.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3236' Jun 4 23:51:03.936: INFO: stderr: "" Jun 4 23:51:03.936: INFO: stdout: "update-demo-nautilus-rjkgx update-demo-nautilus-v6sgm " Jun 4 23:51:03.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rjkgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3236' Jun 4 23:51:04.053: INFO: stderr: "" Jun 4 23:51:04.053: INFO: stdout: "" Jun 4 23:51:04.053: INFO: update-demo-nautilus-rjkgx is created but not running Jun 4 23:51:09.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3236' Jun 4 23:51:09.172: INFO: stderr: "" Jun 4 23:51:09.172: INFO: stdout: "update-demo-nautilus-rjkgx update-demo-nautilus-v6sgm " Jun 4 23:51:09.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rjkgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3236' Jun 4 23:51:09.264: INFO: stderr: "" Jun 4 23:51:09.264: INFO: stdout: "true" Jun 4 23:51:09.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rjkgx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3236' Jun 4 23:51:09.358: INFO: stderr: "" Jun 4 23:51:09.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 23:51:09.358: INFO: validating pod update-demo-nautilus-rjkgx Jun 4 23:51:09.382: INFO: got data: { "image": "nautilus.jpg" } Jun 4 23:51:09.382: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 23:51:09.382: INFO: update-demo-nautilus-rjkgx is verified up and running Jun 4 23:51:09.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v6sgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3236' Jun 4 23:51:09.473: INFO: stderr: "" Jun 4 23:51:09.473: INFO: stdout: "true" Jun 4 23:51:09.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v6sgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3236' Jun 4 23:51:09.569: INFO: stderr: "" Jun 4 23:51:09.569: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 23:51:09.570: INFO: validating pod update-demo-nautilus-v6sgm Jun 4 23:51:09.583: INFO: got data: { "image": "nautilus.jpg" } Jun 4 23:51:09.583: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 4 23:51:09.583: INFO: update-demo-nautilus-v6sgm is verified up and running STEP: using delete to clean up resources Jun 4 23:51:09.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3236' Jun 4 23:51:09.689: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 23:51:09.689: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 4 23:51:09.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3236' Jun 4 23:51:09.788: INFO: stderr: "No resources found in kubectl-3236 namespace.\n" Jun 4 23:51:09.788: INFO: stdout: "" Jun 4 23:51:09.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3236 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 23:51:09.894: INFO: stderr: "" Jun 4 23:51:09.894: INFO: stdout: "update-demo-nautilus-rjkgx\nupdate-demo-nautilus-v6sgm\n" Jun 4 23:51:10.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3236' Jun 4 23:51:10.525: INFO: stderr: "No resources found in kubectl-3236 namespace.\n" Jun 4 23:51:10.525: INFO: stdout: "" Jun 4 23:51:10.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3236 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 
23:51:10.625: INFO: stderr: "" Jun 4 23:51:10.625: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:51:10.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3236" for this suite. • [SLOW TEST:7.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":62,"skipped":1018,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:51:10.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:51:26.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6913" for this suite. • [SLOW TEST:16.238 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":63,"skipped":1038,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:51:26.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 4 23:51:27.615: INFO: Pod name wrapped-volume-race-001b8510-3f2a-49c8-9d0f-459354e8234e: Found 0 pods out of 5 Jun 4 23:51:32.624: INFO: Pod name wrapped-volume-race-001b8510-3f2a-49c8-9d0f-459354e8234e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-001b8510-3f2a-49c8-9d0f-459354e8234e in namespace emptydir-wrapper-121, will wait for the garbage collector to delete the pods Jun 4 23:51:48.717: INFO: Deleting ReplicationController wrapped-volume-race-001b8510-3f2a-49c8-9d0f-459354e8234e took: 8.494256ms Jun 4 23:51:49.117: INFO: Terminating ReplicationController wrapped-volume-race-001b8510-3f2a-49c8-9d0f-459354e8234e pods took: 400.305553ms STEP: Creating RC which spawns configmap-volume pods Jun 4 23:52:05.084: INFO: Pod name wrapped-volume-race-2a6040c5-80d8-44f6-9cbd-7a4b7a15041b: Found 0 pods out of 5 Jun 4 23:52:10.094: INFO: Pod name wrapped-volume-race-2a6040c5-80d8-44f6-9cbd-7a4b7a15041b: Found 5 pods out of 5 STEP: Ensuring each pod 
is running STEP: deleting ReplicationController wrapped-volume-race-2a6040c5-80d8-44f6-9cbd-7a4b7a15041b in namespace emptydir-wrapper-121, will wait for the garbage collector to delete the pods Jun 4 23:52:26.390: INFO: Deleting ReplicationController wrapped-volume-race-2a6040c5-80d8-44f6-9cbd-7a4b7a15041b took: 7.660562ms Jun 4 23:52:26.791: INFO: Terminating ReplicationController wrapped-volume-race-2a6040c5-80d8-44f6-9cbd-7a4b7a15041b pods took: 400.310487ms STEP: Creating RC which spawns configmap-volume pods Jun 4 23:52:35.435: INFO: Pod name wrapped-volume-race-93abddad-9b13-407f-b963-ba9e2986c38c: Found 0 pods out of 5 Jun 4 23:52:40.444: INFO: Pod name wrapped-volume-race-93abddad-9b13-407f-b963-ba9e2986c38c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-93abddad-9b13-407f-b963-ba9e2986c38c in namespace emptydir-wrapper-121, will wait for the garbage collector to delete the pods Jun 4 23:52:52.652: INFO: Deleting ReplicationController wrapped-volume-race-93abddad-9b13-407f-b963-ba9e2986c38c took: 15.266828ms Jun 4 23:52:52.952: INFO: Terminating ReplicationController wrapped-volume-race-93abddad-9b13-407f-b963-ba9e2986c38c pods took: 300.204533ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:53:05.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-121" for this suite. 
• [SLOW TEST:98.857 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":64,"skipped":1041,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:53:05.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 4 23:53:05.836: INFO: Waiting up to 5m0s for pod "pod-9e654e6b-df65-446a-8296-e454d38c09d4" in namespace "emptydir-4126" to be "Succeeded or Failed" Jun 4 23:53:05.846: INFO: Pod "pod-9e654e6b-df65-446a-8296-e454d38c09d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035743ms Jun 4 23:53:07.850: INFO: Pod "pod-9e654e6b-df65-446a-8296-e454d38c09d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01427299s Jun 4 23:53:09.855: INFO: Pod "pod-9e654e6b-df65-446a-8296-e454d38c09d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018568637s STEP: Saw pod success Jun 4 23:53:09.855: INFO: Pod "pod-9e654e6b-df65-446a-8296-e454d38c09d4" satisfied condition "Succeeded or Failed" Jun 4 23:53:09.858: INFO: Trying to get logs from node latest-worker2 pod pod-9e654e6b-df65-446a-8296-e454d38c09d4 container test-container: STEP: delete the pod Jun 4 23:53:09.943: INFO: Waiting for pod pod-9e654e6b-df65-446a-8296-e454d38c09d4 to disappear Jun 4 23:53:09.951: INFO: Pod pod-9e654e6b-df65-446a-8296-e454d38c09d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:53:09.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4126" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:53:09.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:53:14.159: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "containers-3505" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:53:14.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-7dd5a512-cd10-41b4-8a51-d3971c0df760 in namespace container-probe-601 Jun 4 23:53:18.378: INFO: Started pod busybox-7dd5a512-cd10-41b4-8a51-d3971c0df760 in namespace container-probe-601 STEP: checking the pod's current state and verifying that restartCount is present Jun 4 23:53:18.381: INFO: Initial restart count of pod busybox-7dd5a512-cd10-41b4-8a51-d3971c0df760 is 0 Jun 4 23:54:06.678: INFO: Restart count of pod container-probe-601/busybox-7dd5a512-cd10-41b4-8a51-d3971c0df760 is now 1 (48.297498622s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:54:06.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-601" for this suite. • [SLOW TEST:52.572 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":1142,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:54:06.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:54:23.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4398" for this suite. • [SLOW TEST:16.481 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":68,"skipped":1148,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:54:23.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-108b0be8-805b-4e4e-bb4f-aa91e7151161 in namespace container-probe-1336 Jun 4 23:54:27.412: INFO: Started pod liveness-108b0be8-805b-4e4e-bb4f-aa91e7151161 in namespace container-probe-1336 STEP: checking the pod's current state and verifying that restartCount is present Jun 4 23:54:27.415: INFO: Initial restart count of pod liveness-108b0be8-805b-4e4e-bb4f-aa91e7151161 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:58:28.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1336" for this suite. 
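The tcp:8080 liveness test that just finished verifies a pod is *not* restarted while its port stays reachable. A minimal sketch of that kind of pod spec (name, image, and timings are illustrative assumptions, not taken from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-example   # illustrative name
spec:
  containers:
  - name: server
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      tcpSocket:
        port: 8080        # kubelet opens a TCP connection; success = alive
      initialDelaySeconds: 15
      periodSeconds: 5
```

As long as the TCP handshake on 8080 keeps succeeding, restartCount stays 0, which is what the roughly four-minute observation window above confirms.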
• [SLOW TEST:244.864 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1153,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:58:28.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jun 4 23:58:28.593: INFO: Waiting up to 5m0s for pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471" in namespace "var-expansion-9168" to be "Succeeded or Failed" Jun 4 23:58:28.623: INFO: Pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471": Phase="Pending", Reason="", readiness=false. Elapsed: 29.923813ms Jun 4 23:58:30.628: INFO: Pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03440453s Jun 4 23:58:32.632: INFO: Pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471": Phase="Running", Reason="", readiness=true. Elapsed: 4.038839377s Jun 4 23:58:34.637: INFO: Pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043689863s STEP: Saw pod success Jun 4 23:58:34.637: INFO: Pod "var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471" satisfied condition "Succeeded or Failed" Jun 4 23:58:34.640: INFO: Trying to get logs from node latest-worker pod var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471 container dapi-container: STEP: delete the pod Jun 4 23:58:34.724: INFO: Waiting for pod var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471 to disappear Jun 4 23:58:34.735: INFO: Pod var-expansion-77432ae2-6a53-46b0-be34-64a2d9b0c471 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:58:34.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9168" for this suite. 
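The variable-expansion test that just passed exercises `$(VAR)` substitution in a container's command. A minimal sketch of the pod shape involved, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumed image for the sketch
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by Kubernetes before the command runs,
    # independent of any shell; the pod then exits Succeeded
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
```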
• [SLOW TEST:6.656 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:58:34.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 4 23:58:34.872: INFO: Waiting up to 5m0s for pod "pod-7d6bc46c-f97d-4968-8e51-36b668048d99" in namespace "emptydir-8930" to be "Succeeded or Failed" Jun 4 23:58:34.893: INFO: Pod "pod-7d6bc46c-f97d-4968-8e51-36b668048d99": Phase="Pending", Reason="", readiness=false. Elapsed: 20.822178ms Jun 4 23:58:38.142: INFO: Pod "pod-7d6bc46c-f97d-4968-8e51-36b668048d99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269100067s Jun 4 23:58:40.543: INFO: Pod "pod-7d6bc46c-f97d-4968-8e51-36b668048d99": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 5.670034814s STEP: Saw pod success Jun 4 23:58:40.543: INFO: Pod "pod-7d6bc46c-f97d-4968-8e51-36b668048d99" satisfied condition "Succeeded or Failed" Jun 4 23:58:40.545: INFO: Trying to get logs from node latest-worker pod pod-7d6bc46c-f97d-4968-8e51-36b668048d99 container test-container: STEP: delete the pod Jun 4 23:58:41.306: INFO: Waiting for pod pod-7d6bc46c-f97d-4968-8e51-36b668048d99 to disappear Jun 4 23:58:41.353: INFO: Pod pod-7d6bc46c-f97d-4968-8e51-36b668048d99 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:58:41.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8930" for this suite. • [SLOW TEST:6.705 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1181,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:58:41.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: 
Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 4 23:58:45.690: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:58:45.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6048" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1188,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:58:45.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 4 23:58:45.824: INFO: Waiting up to 5m0s for pod "pod-76215813-b718-476e-b482-8c74529dd537" in namespace "emptydir-5846" to be "Succeeded or Failed" Jun 4 23:58:45.827: INFO: Pod "pod-76215813-b718-476e-b482-8c74529dd537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.716096ms Jun 4 23:58:47.832: INFO: Pod "pod-76215813-b718-476e-b482-8c74529dd537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007760459s Jun 4 23:58:49.837: INFO: Pod "pod-76215813-b718-476e-b482-8c74529dd537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013161501s STEP: Saw pod success Jun 4 23:58:49.837: INFO: Pod "pod-76215813-b718-476e-b482-8c74529dd537" satisfied condition "Succeeded or Failed" Jun 4 23:58:49.840: INFO: Trying to get logs from node latest-worker2 pod pod-76215813-b718-476e-b482-8c74529dd537 container test-container: STEP: delete the pod Jun 4 23:58:49.883: INFO: Waiting for pod pod-76215813-b718-476e-b482-8c74529dd537 to disappear Jun 4 23:58:49.891: INFO: Pod pod-76215813-b718-476e-b482-8c74529dd537 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:58:49.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5846" for this suite. 
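The (non-root,0644,tmpfs) emptyDir test above writes a mode-0644 file into a memory-backed emptyDir as a non-root user and checks the observed permissions. A hedged sketch of that pod shape (names, UID, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # assumed non-root UID
  containers:
  - name: test-container
    image: busybox               # assumed image for the sketch
    command: ["/bin/sh", "-c",
      "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```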
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1193,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:58:49.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4059 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 4 23:58:49.978: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 4 23:58:50.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:58:52.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:58:54.049: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:58:56.043: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:58:58.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:59:00.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:59:02.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:59:04.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 
4 23:59:06.038: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 4 23:59:08.038: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 4 23:59:08.051: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 4 23:59:12.214: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.105 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4059 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 23:59:12.214: INFO: >>> kubeConfig: /root/.kube/config I0604 23:59:12.245801 7 log.go:172] (0xc002a6c000) (0xc001047220) Create stream I0604 23:59:12.245849 7 log.go:172] (0xc002a6c000) (0xc001047220) Stream added, broadcasting: 1 I0604 23:59:12.248642 7 log.go:172] (0xc002a6c000) Reply frame received for 1 I0604 23:59:12.248685 7 log.go:172] (0xc002a6c000) (0xc000c617c0) Create stream I0604 23:59:12.248700 7 log.go:172] (0xc002a6c000) (0xc000c617c0) Stream added, broadcasting: 3 I0604 23:59:12.249868 7 log.go:172] (0xc002a6c000) Reply frame received for 3 I0604 23:59:12.249903 7 log.go:172] (0xc002a6c000) (0xc001047360) Create stream I0604 23:59:12.249916 7 log.go:172] (0xc002a6c000) (0xc001047360) Stream added, broadcasting: 5 I0604 23:59:12.250794 7 log.go:172] (0xc002a6c000) Reply frame received for 5 I0604 23:59:13.342744 7 log.go:172] (0xc002a6c000) Data frame received for 3 I0604 23:59:13.342792 7 log.go:172] (0xc000c617c0) (3) Data frame handling I0604 23:59:13.342827 7 log.go:172] (0xc000c617c0) (3) Data frame sent I0604 23:59:13.342861 7 log.go:172] (0xc002a6c000) Data frame received for 3 I0604 23:59:13.342889 7 log.go:172] (0xc000c617c0) (3) Data frame handling I0604 23:59:13.343198 7 log.go:172] (0xc002a6c000) Data frame received for 5 I0604 23:59:13.343226 7 log.go:172] (0xc001047360) (5) Data frame handling I0604 23:59:13.346218 7 log.go:172] (0xc002a6c000) Data frame received for 1 I0604 
23:59:13.346252 7 log.go:172] (0xc001047220) (1) Data frame handling I0604 23:59:13.346288 7 log.go:172] (0xc001047220) (1) Data frame sent I0604 23:59:13.346590 7 log.go:172] (0xc002a6c000) (0xc001047220) Stream removed, broadcasting: 1 I0604 23:59:13.346632 7 log.go:172] (0xc002a6c000) Go away received I0604 23:59:13.346840 7 log.go:172] (0xc002a6c000) (0xc001047220) Stream removed, broadcasting: 1 I0604 23:59:13.346865 7 log.go:172] (0xc002a6c000) (0xc000c617c0) Stream removed, broadcasting: 3 I0604 23:59:13.346882 7 log.go:172] (0xc002a6c000) (0xc001047360) Stream removed, broadcasting: 5 Jun 4 23:59:13.346: INFO: Found all expected endpoints: [netserver-0] Jun 4 23:59:13.351: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4059 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 23:59:13.351: INFO: >>> kubeConfig: /root/.kube/config I0604 23:59:13.385710 7 log.go:172] (0xc0029e64d0) (0xc0013ba960) Create stream I0604 23:59:13.385761 7 log.go:172] (0xc0029e64d0) (0xc0013ba960) Stream added, broadcasting: 1 I0604 23:59:13.388515 7 log.go:172] (0xc0029e64d0) Reply frame received for 1 I0604 23:59:13.388560 7 log.go:172] (0xc0029e64d0) (0xc000c32000) Create stream I0604 23:59:13.388577 7 log.go:172] (0xc0029e64d0) (0xc000c32000) Stream added, broadcasting: 3 I0604 23:59:13.389662 7 log.go:172] (0xc0029e64d0) Reply frame received for 3 I0604 23:59:13.389703 7 log.go:172] (0xc0029e64d0) (0xc0013baaa0) Create stream I0604 23:59:13.389725 7 log.go:172] (0xc0029e64d0) (0xc0013baaa0) Stream added, broadcasting: 5 I0604 23:59:13.390673 7 log.go:172] (0xc0029e64d0) Reply frame received for 5 I0604 23:59:14.466291 7 log.go:172] (0xc0029e64d0) Data frame received for 3 I0604 23:59:14.466347 7 log.go:172] (0xc000c32000) (3) Data frame handling I0604 23:59:14.466364 7 log.go:172] (0xc000c32000) (3) Data frame sent 
I0604 23:59:14.466388 7 log.go:172] (0xc0029e64d0) Data frame received for 3 I0604 23:59:14.466403 7 log.go:172] (0xc000c32000) (3) Data frame handling I0604 23:59:14.466581 7 log.go:172] (0xc0029e64d0) Data frame received for 5 I0604 23:59:14.466605 7 log.go:172] (0xc0013baaa0) (5) Data frame handling I0604 23:59:14.468888 7 log.go:172] (0xc0029e64d0) Data frame received for 1 I0604 23:59:14.468909 7 log.go:172] (0xc0013ba960) (1) Data frame handling I0604 23:59:14.468926 7 log.go:172] (0xc0013ba960) (1) Data frame sent I0604 23:59:14.468943 7 log.go:172] (0xc0029e64d0) (0xc0013ba960) Stream removed, broadcasting: 1 I0604 23:59:14.468981 7 log.go:172] (0xc0029e64d0) Go away received I0604 23:59:14.469101 7 log.go:172] (0xc0029e64d0) (0xc0013ba960) Stream removed, broadcasting: 1 I0604 23:59:14.469341 7 log.go:172] (0xc0029e64d0) (0xc000c32000) Stream removed, broadcasting: 3 I0604 23:59:14.469354 7 log.go:172] (0xc0029e64d0) (0xc0013baaa0) Stream removed, broadcasting: 5 Jun 4 23:59:14.469: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:59:14.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4059" for this suite. 
• [SLOW TEST:24.579 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1201,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:59:14.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 4 23:59:14.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621" in namespace "downward-api-4085" to be "Succeeded or Failed" Jun 4 23:59:14.593: INFO: Pod "downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621": Phase="Pending", 
Reason="", readiness=false. Elapsed: 15.903714ms Jun 4 23:59:16.599: INFO: Pod "downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021271059s Jun 4 23:59:18.602: INFO: Pod "downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02503789s STEP: Saw pod success Jun 4 23:59:18.602: INFO: Pod "downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621" satisfied condition "Succeeded or Failed" Jun 4 23:59:18.605: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621 container client-container: STEP: delete the pod Jun 4 23:59:18.640: INFO: Waiting for pod downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621 to disappear Jun 4 23:59:18.648: INFO: Pod downwardapi-volume-d714d46c-6279-4538-9d98-1117dfd3b621 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:59:18.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4085" for this suite. 
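The Downward API DefaultMode test that just completed mounts pod metadata as files and checks their permission bits. A minimal sketch of a pod using a downwardAPI volume with `defaultMode` (all names and the mode value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                        # assumed image for the sketch
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                   # applied to files lacking a per-item mode
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
```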
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:59:18.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 4 23:59:18.774: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:59:20.806: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Pending, waiting for it to be Running (with Ready = true) Jun 4 23:59:22.781: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:24.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:26.778: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:28.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running 
(Ready = false) Jun 4 23:59:30.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:32.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:34.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:36.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:38.779: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = false) Jun 4 23:59:40.778: INFO: The status of Pod test-webserver-7b483056-7145-422c-a96f-2ac5c170fb28 is Running (Ready = true) Jun 4 23:59:40.780: INFO: Container started at 2020-06-04 23:59:21 +0000 UTC, pod became ready at 2020-06-04 23:59:39 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:59:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2394" for this suite. 
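The readiness-probe test above expects the pod to stay not-ready through the configured initial delay and then become ready without ever restarting, matching the roughly 20 seconds of `Ready = false` polls in the log. A minimal sketch of that probe configuration (name, image, port, and timings are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-example   # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: ["test-webserver"]
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod must not report Ready before this elapses
      periodSeconds: 3
```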
• [SLOW TEST:22.133 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1250,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:59:40.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-911a286b-66b5-4ad3-95fc-22c8ac209d5c STEP: Creating a pod to test consume configMaps Jun 4 23:59:40.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3" in namespace "configmap-714" to be "Succeeded or Failed" Jun 4 23:59:40.917: INFO: Pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.171277ms Jun 4 23:59:42.922: INFO: Pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015169846s Jun 4 23:59:44.968: INFO: Pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.061321105s Jun 4 23:59:46.973: INFO: Pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06596618s STEP: Saw pod success Jun 4 23:59:46.973: INFO: Pod "pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3" satisfied condition "Succeeded or Failed" Jun 4 23:59:46.976: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3 container configmap-volume-test: STEP: delete the pod Jun 4 23:59:47.048: INFO: Waiting for pod pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3 to disappear Jun 4 23:59:47.056: INFO: Pod pod-configmaps-d5890e77-cae8-4002-b3d4-60c25ab2a4a3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:59:47.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-714" for this suite. 
• [SLOW TEST:6.276 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1254,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:59:47.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 4 23:59:48.033: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 4 23:59:50.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911988, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726911987, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 4 23:59:53.125: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 4 23:59:53.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3464" for this suite. STEP: Destroying namespace "webhook-3464-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.411 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":78,"skipped":1269,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 4 23:59:53.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 5 00:00:01.643: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:01.662: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:03.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:03.668: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:05.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:05.667: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:07.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:07.667: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:09.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:09.668: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:11.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:11.667: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:13.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:13.667: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:15.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:15.771: INFO: Pod pod-with-prestop-http-hook still exists Jun 5 00:00:17.662: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 5 00:00:17.667: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:00:17.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9968" for this suite. 
• [SLOW TEST:24.206 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1273,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:00:17.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9292 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 5 00:00:17.732: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 5 00:00:17.825: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:00:19.913: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running 
(with Ready = true) Jun 5 00:00:21.830: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:23.848: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:25.829: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:27.828: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:29.829: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:31.830: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:00:33.830: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 5 00:00:33.837: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 5 00:00:38.775: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.111:8080/dial?request=hostname&protocol=http&host=10.244.1.110&port=8080&tries=1'] Namespace:pod-network-test-9292 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:00:38.775: INFO: >>> kubeConfig: /root/.kube/config I0605 00:00:38.807161 7 log.go:172] (0xc0029e7130) (0xc00126aaa0) Create stream I0605 00:00:38.807195 7 log.go:172] (0xc0029e7130) (0xc00126aaa0) Stream added, broadcasting: 1 I0605 00:00:38.809608 7 log.go:172] (0xc0029e7130) Reply frame received for 1 I0605 00:00:38.809656 7 log.go:172] (0xc0029e7130) (0xc0010474a0) Create stream I0605 00:00:38.809674 7 log.go:172] (0xc0029e7130) (0xc0010474a0) Stream added, broadcasting: 3 I0605 00:00:38.810724 7 log.go:172] (0xc0029e7130) Reply frame received for 3 I0605 00:00:38.810774 7 log.go:172] (0xc0029e7130) (0xc00126adc0) Create stream I0605 00:00:38.810797 7 log.go:172] (0xc0029e7130) (0xc00126adc0) Stream added, broadcasting: 5 I0605 00:00:38.811640 7 log.go:172] (0xc0029e7130) Reply frame received for 5 I0605 00:00:38.925939 7 log.go:172] (0xc0029e7130) Data frame received for 3 I0605 00:00:38.925984 7 log.go:172] (0xc0010474a0) (3) Data 
frame handling I0605 00:00:38.926022 7 log.go:172] (0xc0010474a0) (3) Data frame sent I0605 00:00:38.926539 7 log.go:172] (0xc0029e7130) Data frame received for 5 I0605 00:00:38.926561 7 log.go:172] (0xc00126adc0) (5) Data frame handling I0605 00:00:38.926783 7 log.go:172] (0xc0029e7130) Data frame received for 3 I0605 00:00:38.926808 7 log.go:172] (0xc0010474a0) (3) Data frame handling I0605 00:00:38.928792 7 log.go:172] (0xc0029e7130) Data frame received for 1 I0605 00:00:38.928808 7 log.go:172] (0xc00126aaa0) (1) Data frame handling I0605 00:00:38.928822 7 log.go:172] (0xc00126aaa0) (1) Data frame sent I0605 00:00:38.928834 7 log.go:172] (0xc0029e7130) (0xc00126aaa0) Stream removed, broadcasting: 1 I0605 00:00:38.928854 7 log.go:172] (0xc0029e7130) Go away received I0605 00:00:38.928991 7 log.go:172] (0xc0029e7130) (0xc00126aaa0) Stream removed, broadcasting: 1 I0605 00:00:38.929030 7 log.go:172] (0xc0029e7130) (0xc0010474a0) Stream removed, broadcasting: 3 I0605 00:00:38.929058 7 log.go:172] (0xc0029e7130) (0xc00126adc0) Stream removed, broadcasting: 5 Jun 5 00:00:38.929: INFO: Waiting for responses: map[] Jun 5 00:00:38.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.111:8080/dial?request=hostname&protocol=http&host=10.244.2.27&port=8080&tries=1'] Namespace:pod-network-test-9292 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:00:38.932: INFO: >>> kubeConfig: /root/.kube/config I0605 00:00:38.958502 7 log.go:172] (0xc002950dc0) (0xc000c40780) Create stream I0605 00:00:38.958524 7 log.go:172] (0xc002950dc0) (0xc000c40780) Stream added, broadcasting: 1 I0605 00:00:38.960493 7 log.go:172] (0xc002950dc0) Reply frame received for 1 I0605 00:00:38.960529 7 log.go:172] (0xc002950dc0) (0xc001218e60) Create stream I0605 00:00:38.960542 7 log.go:172] (0xc002950dc0) (0xc001218e60) Stream added, broadcasting: 3 I0605 00:00:38.961741 7 log.go:172] (0xc002950dc0) 
Reply frame received for 3 I0605 00:00:38.961798 7 log.go:172] (0xc002950dc0) (0xc00126b180) Create stream I0605 00:00:38.961812 7 log.go:172] (0xc002950dc0) (0xc00126b180) Stream added, broadcasting: 5 I0605 00:00:38.962769 7 log.go:172] (0xc002950dc0) Reply frame received for 5 I0605 00:00:39.015512 7 log.go:172] (0xc002950dc0) Data frame received for 3 I0605 00:00:39.015542 7 log.go:172] (0xc001218e60) (3) Data frame handling I0605 00:00:39.015570 7 log.go:172] (0xc001218e60) (3) Data frame sent I0605 00:00:39.016079 7 log.go:172] (0xc002950dc0) Data frame received for 5 I0605 00:00:39.016096 7 log.go:172] (0xc00126b180) (5) Data frame handling I0605 00:00:39.016115 7 log.go:172] (0xc002950dc0) Data frame received for 3 I0605 00:00:39.016123 7 log.go:172] (0xc001218e60) (3) Data frame handling I0605 00:00:39.017636 7 log.go:172] (0xc002950dc0) Data frame received for 1 I0605 00:00:39.017654 7 log.go:172] (0xc000c40780) (1) Data frame handling I0605 00:00:39.017668 7 log.go:172] (0xc000c40780) (1) Data frame sent I0605 00:00:39.017686 7 log.go:172] (0xc002950dc0) (0xc000c40780) Stream removed, broadcasting: 1 I0605 00:00:39.017702 7 log.go:172] (0xc002950dc0) Go away received I0605 00:00:39.017880 7 log.go:172] (0xc002950dc0) (0xc000c40780) Stream removed, broadcasting: 1 I0605 00:00:39.017917 7 log.go:172] (0xc002950dc0) (0xc001218e60) Stream removed, broadcasting: 3 I0605 00:00:39.017929 7 log.go:172] (0xc002950dc0) (0xc00126b180) Stream removed, broadcasting: 5 Jun 5 00:00:39.017: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:00:39.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9292" for this suite. 
• [SLOW TEST:21.344 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1285,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:00:39.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 5 00:00:39.144: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 5 00:00:39.178: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 5 00:00:39.178: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 5 00:00:39.184: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 5 00:00:39.184: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 5 00:00:39.241: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jun 5 00:00:39.241: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 5 00:00:46.710: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:00:46.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8275" for this suite. • [SLOW TEST:7.729 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":288,"completed":81,"skipped":1293,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:00:46.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:00:46.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a" in namespace "downward-api-6356" to be "Succeeded or Failed" Jun 5 00:00:46.913: INFO: Pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516151ms Jun 5 00:00:48.918: INFO: Pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00799562s Jun 5 00:00:51.011: INFO: Pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101100722s Jun 5 00:00:53.096: INFO: Pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.185908481s STEP: Saw pod success Jun 5 00:00:53.096: INFO: Pod "downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a" satisfied condition "Succeeded or Failed" Jun 5 00:00:53.281: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a container client-container: STEP: delete the pod Jun 5 00:00:53.634: INFO: Waiting for pod downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a to disappear Jun 5 00:00:53.640: INFO: Pod downwardapi-volume-1dca9327-369c-4586-b780-7be712491d4a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:00:53.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6356" for this suite. • [SLOW TEST:6.924 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1307,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:00:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 00:00:55.398: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 00:00:57.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726912055, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726912055, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726912055, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726912055, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:01:00.491: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:01:00.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7558-crds.webhook.example.com via the AdmissionRegistration API STEP: 
Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:01.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5776" for this suite. STEP: Destroying namespace "webhook-5776-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.049 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":83,"skipped":1318,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:01.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7ffh8 in namespace 
proxy-1110 I0605 00:01:01.808953 7 runners.go:190] Created replication controller with name: proxy-service-7ffh8, namespace: proxy-1110, replica count: 1 I0605 00:01:02.859299 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:01:03.859496 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:01:04.859721 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:01:05.859981 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0605 00:01:06.860174 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0605 00:01:07.860375 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0605 00:01:08.860568 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0605 00:01:09.860771 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0605 00:01:10.861069 7 runners.go:190] proxy-service-7ffh8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:01:10.864: INFO: setup took 9.094688399s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test 
(200; 7.351011ms) Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 7.347621ms) Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 7.508094ms) Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 7.604547ms) Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 7.562476ms) Jun 5 00:01:10.872: INFO: (0) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 8.127451ms) Jun 5 00:01:10.875: INFO: (0) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 10.432405ms) Jun 5 00:01:10.875: INFO: (0) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 10.565058ms) Jun 5 00:01:10.880: INFO: (0) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 15.849036ms) Jun 5 00:01:10.880: INFO: (0) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 15.765984ms) Jun 5 00:01:10.881: INFO: (0) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 16.341073ms) Jun 5 00:01:10.882: INFO: (0) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 17.166342ms) Jun 5 00:01:10.882: INFO: (0) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 17.075553ms) Jun 5 00:01:10.884: INFO: (0) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test<... 
(200; 4.291912ms) Jun 5 00:01:10.891: INFO: (1) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.286395ms) Jun 5 00:01:10.891: INFO: (1) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 4.618918ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.119895ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.297264ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.229369ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 5.19744ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 5.381145ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.389501ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.422163ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 5.795602ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.640866ms) Jun 5 00:01:10.892: INFO: (1) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.831416ms) Jun 5 00:01:10.895: INFO: (2) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 2.395457ms) Jun 5 00:01:10.897: INFO: (2) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.038101ms) Jun 5 00:01:10.902: INFO: (2) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test<... 
(200; 12.337727ms) Jun 5 00:01:10.905: INFO: (2) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 12.559057ms) Jun 5 00:01:10.905: INFO: (2) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 12.834137ms) Jun 5 00:01:10.906: INFO: (2) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 13.531524ms) Jun 5 00:01:10.906: INFO: (2) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 13.616304ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 14.163426ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 13.794486ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 14.089513ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 14.228247ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 13.912439ms) Jun 5 00:01:10.907: INFO: (2) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 14.155652ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.282553ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.250483ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.240465ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.253682ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... 
(200; 5.389812ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 5.337858ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 5.370893ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 5.511509ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 5.43775ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.476899ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.508504ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.541127ms) Jun 5 00:01:10.912: INFO: (3) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 5.647916ms) Jun 5 00:01:10.913: INFO: (3) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 5.511996ms) Jun 5 00:01:10.918: INFO: (4) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 5.524553ms) Jun 5 00:01:10.918: INFO: (4) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.60567ms) Jun 5 00:01:10.918: INFO: (4) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.55568ms) Jun 5 00:01:10.918: INFO: (4) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.537424ms) Jun 5 00:01:10.918: INFO: (4) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.678982ms) Jun 5 00:01:10.919: INFO: (4) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 6.054883ms) Jun 5 00:01:10.919: INFO: (4) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 6.624949ms) Jun 5 00:01:10.920: INFO: (4) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 6.699681ms) Jun 5 00:01:10.920: INFO: (4) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 6.729621ms) Jun 5 00:01:10.920: INFO: (4) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 7.569935ms) Jun 5 00:01:10.920: INFO: (4) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 7.600888ms) Jun 5 00:01:10.920: INFO: (4) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test<... 
(200; 4.465315ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.475802ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 4.438435ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.44813ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.620665ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.496554ms) Jun 5 00:01:10.925: INFO: (5) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 4.572716ms) Jun 5 00:01:10.926: INFO: (5) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 4.553323ms) Jun 5 00:01:10.926: INFO: (5) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.599328ms) Jun 5 00:01:10.927: INFO: (5) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.632884ms) Jun 5 00:01:10.927: INFO: (5) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.786402ms) Jun 5 00:01:10.927: INFO: (5) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.897415ms) Jun 5 00:01:10.927: INFO: (5) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.767291ms) Jun 5 00:01:10.927: INFO: (5) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.946732ms) Jun 5 00:01:10.929: INFO: (6) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 2.471906ms) Jun 5 00:01:10.930: INFO: (6) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 2.716359ms) Jun 
5 00:01:10.931: INFO: (6) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.97255ms) Jun 5 00:01:10.932: INFO: (6) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 4.812278ms) Jun 5 00:01:10.932: INFO: (6) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.762195ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.553168ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.554501ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.564533ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 5.685841ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 5.853251ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 5.930915ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.890375ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 5.886519ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 6.006096ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.973207ms) Jun 5 00:01:10.933: INFO: (6) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: ... 
(200; 3.326724ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 2.875314ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 3.473321ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.938415ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 3.693318ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.320379ms) Jun 5 00:01:10.937: INFO: (7) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 4.435844ms) Jun 5 00:01:10.938: INFO: (7) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: ... (200; 4.364256ms) Jun 5 00:01:10.943: INFO: (8) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 4.439167ms) Jun 5 00:01:10.943: INFO: (8) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 4.477111ms) Jun 5 00:01:10.943: INFO: (8) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 4.480364ms) Jun 5 00:01:10.944: INFO: (8) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.988403ms) Jun 5 00:01:10.944: INFO: (8) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.987222ms) Jun 5 00:01:10.944: INFO: (8) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 5.036558ms) Jun 5 00:01:10.944: INFO: (8) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 5.095285ms) Jun 5 00:01:10.945: INFO: (8) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.730268ms) Jun 5 00:01:10.945: INFO: (8) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.763244ms) Jun 5 00:01:10.945: INFO: (8) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.79097ms) Jun 5 00:01:10.945: INFO: (8) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.7577ms) Jun 5 00:01:10.948: INFO: (9) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 3.635085ms) Jun 5 00:01:10.949: INFO: (9) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 3.994343ms) Jun 5 00:01:10.949: INFO: (9) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 4.168959ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 4.863481ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 4.919686ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.105953ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 4.993087ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.879475ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.912616ms) Jun 5 00:01:10.950: INFO: (9) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.040129ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.810674ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.864233ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 3.897449ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 4.016164ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 4.184353ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.182558ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 4.376296ms) Jun 5 00:01:10.954: INFO: (10) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.449892ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 4.594674ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.748892ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 4.684235ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 
4.765026ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 4.810548ms) Jun 5 00:01:10.955: INFO: (10) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 3.337375ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.762259ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 3.865024ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 4.277472ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 4.325092ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 4.477482ms) Jun 5 00:01:10.960: INFO: (11) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 4.61258ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... 
(200; 4.680893ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 4.709738ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.711323ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.959379ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.030644ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.031901ms) Jun 5 00:01:10.961: INFO: (11) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.011397ms) Jun 5 00:01:10.964: INFO: (12) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.23691ms) Jun 5 00:01:10.964: INFO: (12) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 3.345474ms) Jun 5 00:01:10.964: INFO: (12) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 3.353271ms) Jun 5 00:01:10.965: INFO: (12) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 3.55459ms) Jun 5 00:01:10.965: INFO: (12) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 4.128033ms) Jun 5 00:01:10.965: INFO: (12) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... 
(200; 4.195994ms) Jun 5 00:01:10.965: INFO: (12) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.21727ms) Jun 5 00:01:10.965: INFO: (12) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 4.867676ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 4.968824ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 4.946728ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 4.977729ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.032461ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.999078ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.050576ms) Jun 5 00:01:10.966: INFO: (12) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.939541ms) Jun 5 00:01:10.969: INFO: (13) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 3.430879ms) Jun 5 00:01:10.970: INFO: (13) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... 
(200; 3.362477ms) Jun 5 00:01:10.970: INFO: (13) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 3.822751ms) Jun 5 00:01:10.970: INFO: (13) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 4.342939ms) Jun 5 00:01:10.970: INFO: (13) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 4.314641ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 4.433728ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 4.858452ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 4.965906ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.89832ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.937359ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 5.039787ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.994453ms) Jun 5 00:01:10.971: INFO: (13) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 3.370121ms) Jun 5 00:01:10.975: INFO: (14) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 3.417063ms) Jun 5 00:01:10.975: INFO: (14) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 3.426412ms) Jun 5 00:01:10.975: INFO: (14) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test<... 
(200; 7.464703ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 7.555218ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 7.464956ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 7.485526ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 7.438933ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 7.53407ms) Jun 5 00:01:10.979: INFO: (14) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 7.579166ms) Jun 5 00:01:10.981: INFO: (15) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 1.977255ms) Jun 5 00:01:10.981: INFO: (15) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 2.210362ms) Jun 5 00:01:10.987: INFO: (15) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: ... (200; 12.900246ms) Jun 5 00:01:10.992: INFO: (15) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 12.986243ms) Jun 5 00:01:10.992: INFO: (15) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 13.049533ms) Jun 5 00:01:10.992: INFO: (15) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 13.055023ms) Jun 5 00:01:10.993: INFO: (15) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 14.10534ms) Jun 5 00:01:10.993: INFO: (15) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 14.133252ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 14.533586ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 14.527887ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 14.568888ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 14.587781ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 14.686432ms) Jun 5 00:01:10.994: INFO: (15) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 14.792446ms) Jun 5 00:01:10.998: INFO: (16) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 3.995438ms) Jun 5 00:01:10.998: INFO: (16) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 4.020627ms) Jun 5 00:01:10.998: INFO: (16) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 4.010836ms) Jun 5 00:01:10.998: INFO: (16) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.160358ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo 
(200; 5.087007ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.289962ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.31947ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 5.393203ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.352624ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 5.366195ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 5.454891ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.427254ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 5.495445ms) Jun 5 00:01:10.999: INFO: (16) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: ... (200; 4.092848ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 4.0658ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.119956ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.058636ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.110449ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 4.125152ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 4.174907ms) Jun 5 00:01:11.004: INFO: (17) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 4.180586ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.08984ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.094015ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname2/proxy/: bar (200; 5.113089ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 5.246598ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.250515ms) Jun 5 00:01:11.005: INFO: (17) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.427364ms) Jun 5 00:01:11.007: INFO: (18) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 1.855803ms) Jun 5 00:01:11.008: INFO: (18) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:462/proxy/: tls qux (200; 2.515029ms) Jun 5 00:01:11.008: INFO: (18) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 2.876106ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... 
(200; 4.386951ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 4.492282ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs/proxy/: test (200; 4.557575ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 4.762504ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:162/proxy/: bar (200; 4.705094ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... (200; 4.757968ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:160/proxy/: foo (200; 4.780727ms) Jun 5 00:01:11.010: INFO: (18) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:443/proxy/: test (200; 5.627096ms) Jun 5 00:01:11.017: INFO: (19) /api/v1/namespaces/proxy-1110/pods/proxy-service-7ffh8-mlkhs:1080/proxy/: test<... (200; 5.805485ms) Jun 5 00:01:11.017: INFO: (19) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname1/proxy/: tls baz (200; 5.786683ms) Jun 5 00:01:11.017: INFO: (19) /api/v1/namespaces/proxy-1110/pods/http:proxy-service-7ffh8-mlkhs:1080/proxy/: ... 
(200; 5.816461ms) Jun 5 00:01:11.017: INFO: (19) /api/v1/namespaces/proxy-1110/services/proxy-service-7ffh8:portname1/proxy/: foo (200; 5.834803ms) Jun 5 00:01:11.017: INFO: (19) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname2/proxy/: bar (200; 5.852848ms) Jun 5 00:01:11.018: INFO: (19) /api/v1/namespaces/proxy-1110/services/https:proxy-service-7ffh8:tlsportname2/proxy/: tls qux (200; 5.954938ms) Jun 5 00:01:11.018: INFO: (19) /api/v1/namespaces/proxy-1110/services/http:proxy-service-7ffh8:portname1/proxy/: foo (200; 6.053321ms) Jun 5 00:01:11.018: INFO: (19) /api/v1/namespaces/proxy-1110/pods/https:proxy-service-7ffh8-mlkhs:460/proxy/: tls baz (200; 5.999604ms) STEP: deleting ReplicationController proxy-service-7ffh8 in namespace proxy-1110, will wait for the garbage collector to delete the pods Jun 5 00:01:11.087: INFO: Deleting ReplicationController proxy-service-7ffh8 took: 18.065995ms Jun 5 00:01:11.188: INFO: Terminating ReplicationController proxy-service-7ffh8 pods took: 100.287711ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:13.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1110" for this suite. 
• [SLOW TEST:11.666 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":84,"skipped":1320,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:13.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-9f40af28-bb8e-480a-95c6-76b910122fbb STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:17.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2667" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1334,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:17.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 5 00:01:17.615: INFO: >>> kubeConfig: /root/.kube/config Jun 5 00:01:20.560: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:31.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-281" for this suite. 
• [SLOW TEST:13.820 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":86,"skipped":1356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:31.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 5 00:01:31.449: INFO: Waiting up to 5m0s for pod "pod-a80a085f-67c0-4fc5-9e48-8436ff99731c" in namespace "emptydir-5841" to be "Succeeded or Failed" Jun 5 00:01:31.475: INFO: Pod "pod-a80a085f-67c0-4fc5-9e48-8436ff99731c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.711643ms Jun 5 00:01:33.485: INFO: Pod "pod-a80a085f-67c0-4fc5-9e48-8436ff99731c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035560716s Jun 5 00:01:35.489: INFO: Pod "pod-a80a085f-67c0-4fc5-9e48-8436ff99731c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039094518s STEP: Saw pod success Jun 5 00:01:35.489: INFO: Pod "pod-a80a085f-67c0-4fc5-9e48-8436ff99731c" satisfied condition "Succeeded or Failed" Jun 5 00:01:35.504: INFO: Trying to get logs from node latest-worker2 pod pod-a80a085f-67c0-4fc5-9e48-8436ff99731c container test-container: STEP: delete the pod Jun 5 00:01:35.536: INFO: Waiting for pod pod-a80a085f-67c0-4fc5-9e48-8436ff99731c to disappear Jun 5 00:01:35.546: INFO: Pod pod-a80a085f-67c0-4fc5-9e48-8436ff99731c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:35.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5841" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":87,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:35.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
configMap with name cm-test-opt-del-d39e4fb0-ad1b-42d4-951f-941fbfdf3356 STEP: Creating configMap with name cm-test-opt-upd-eba2737b-fb08-4b2d-913a-f426aadebf4e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d39e4fb0-ad1b-42d4-951f-941fbfdf3356 STEP: Updating configmap cm-test-opt-upd-eba2737b-fb08-4b2d-913a-f426aadebf4e STEP: Creating configMap with name cm-test-opt-create-52aed741-f314-47cc-9d0e-be119c833ac8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:43.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4959" for this suite. • [SLOW TEST:8.303 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1463,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:43.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 5 00:01:44.008: INFO: Waiting up to 5m0s for pod "pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f" in namespace "emptydir-5620" to be "Succeeded or Failed" Jun 5 00:01:44.095: INFO: Pod "pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f": Phase="Pending", Reason="", readiness=false. Elapsed: 86.877646ms Jun 5 00:01:46.099: INFO: Pod "pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090898232s Jun 5 00:01:48.103: INFO: Pod "pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095520365s STEP: Saw pod success Jun 5 00:01:48.104: INFO: Pod "pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f" satisfied condition "Succeeded or Failed" Jun 5 00:01:48.107: INFO: Trying to get logs from node latest-worker pod pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f container test-container: STEP: delete the pod Jun 5 00:01:48.126: INFO: Waiting for pod pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f to disappear Jun 5 00:01:48.128: INFO: Pod pod-a0ee12b2-78e3-450b-a7e0-08ae6f4a654f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:48.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5620" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1464,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:48.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 5 00:01:48.262: INFO: Waiting up to 5m0s for pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249" in namespace "emptydir-8736" to be "Succeeded or Failed" Jun 5 00:01:48.270: INFO: Pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249": Phase="Pending", Reason="", readiness=false. Elapsed: 7.73986ms Jun 5 00:01:50.275: INFO: Pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012782714s Jun 5 00:01:52.281: INFO: Pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01857679s Jun 5 00:01:54.285: INFO: Pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023137898s STEP: Saw pod success Jun 5 00:01:54.285: INFO: Pod "pod-c7982587-fa7b-449f-8bb5-1636bf08e249" satisfied condition "Succeeded or Failed" Jun 5 00:01:54.288: INFO: Trying to get logs from node latest-worker pod pod-c7982587-fa7b-449f-8bb5-1636bf08e249 container test-container: STEP: delete the pod Jun 5 00:01:54.348: INFO: Waiting for pod pod-c7982587-fa7b-449f-8bb5-1636bf08e249 to disappear Jun 5 00:01:54.355: INFO: Pod pod-c7982587-fa7b-449f-8bb5-1636bf08e249 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:01:54.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8736" for this suite. • [SLOW TEST:6.182 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1465,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:01:54.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-030ea14e-a4cb-4317-aa60-d5d3f389d9b9 STEP: Creating a pod to test consume configMaps Jun 5 00:01:54.420: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769" in namespace "projected-2662" to be "Succeeded or Failed" Jun 5 00:01:54.436: INFO: Pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769": Phase="Pending", Reason="", readiness=false. Elapsed: 15.812876ms Jun 5 00:01:56.508: INFO: Pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088241993s Jun 5 00:01:59.148: INFO: Pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.727348648s Jun 5 00:02:01.151: INFO: Pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.730460186s STEP: Saw pod success Jun 5 00:02:01.151: INFO: Pod "pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769" satisfied condition "Succeeded or Failed" Jun 5 00:02:01.153: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769 container projected-configmap-volume-test: STEP: delete the pod Jun 5 00:02:01.273: INFO: Waiting for pod pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769 to disappear Jun 5 00:02:01.277: INFO: Pod pod-projected-configmaps-354bb63f-67fd-4d23-8800-600a4d3a7769 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:02:01.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2662" for this suite. • [SLOW TEST:6.922 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1467,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:02:01.285: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-f64fba69-0971-4878-a6a6-dd3c10d4b1b1 STEP: Creating secret with name s-test-opt-upd-2b2dc03d-8b9b-4779-ab72-b18a71e67083 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f64fba69-0971-4878-a6a6-dd3c10d4b1b1 STEP: Updating secret s-test-opt-upd-2b2dc03d-8b9b-4779-ab72-b18a71e67083 STEP: Creating secret with name s-test-opt-create-7a02dd54-9bf9-4dec-b666-26fa5024941b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:02:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-321" for this suite. 
• [SLOW TEST:8.390 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1482,"failed":0} [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:02:09.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jun 5 00:02:10.338: INFO: created pod pod-service-account-defaultsa Jun 5 00:02:10.338: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 5 00:02:10.344: INFO: created pod pod-service-account-mountsa Jun 5 00:02:10.344: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 5 00:02:10.373: INFO: created pod pod-service-account-nomountsa Jun 5 00:02:10.373: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 5 00:02:10.443: INFO: created pod pod-service-account-defaultsa-mountspec Jun 5 00:02:10.443: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: 
true Jun 5 00:02:10.449: INFO: created pod pod-service-account-mountsa-mountspec Jun 5 00:02:10.449: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 5 00:02:10.482: INFO: created pod pod-service-account-nomountsa-mountspec Jun 5 00:02:10.482: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 5 00:02:10.504: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 5 00:02:10.504: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 5 00:02:10.529: INFO: created pod pod-service-account-mountsa-nomountspec Jun 5 00:02:10.529: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 5 00:02:10.599: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 5 00:02:10.599: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:02:10.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8473" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":93,"skipped":1482,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:02:10.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:02:12.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7459" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":94,"skipped":1490,"failed":0} SS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:02:13.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5421 STEP: creating service affinity-clusterip-transition in namespace services-5421 STEP: creating replication controller affinity-clusterip-transition in namespace services-5421 I0605 00:02:14.611869 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5421, replica count: 3 I0605 00:02:17.662258 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:02:20.662673 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:02:23.662897 7 runners.go:190] 
affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:02:26.663353 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:02:26.870: INFO: Creating new exec pod Jun 5 00:02:33.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5421 execpod-affinityr7n5z -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jun 5 00:02:36.926: INFO: stderr: "I0605 00:02:36.820835 984 log.go:172] (0xc000890790) (0xc0006a0fa0) Create stream\nI0605 00:02:36.820912 984 log.go:172] (0xc000890790) (0xc0006a0fa0) Stream added, broadcasting: 1\nI0605 00:02:36.824650 984 log.go:172] (0xc000890790) Reply frame received for 1\nI0605 00:02:36.824696 984 log.go:172] (0xc000890790) (0xc000658d20) Create stream\nI0605 00:02:36.824714 984 log.go:172] (0xc000890790) (0xc000658d20) Stream added, broadcasting: 3\nI0605 00:02:36.826070 984 log.go:172] (0xc000890790) Reply frame received for 3\nI0605 00:02:36.826110 984 log.go:172] (0xc000890790) (0xc0006505a0) Create stream\nI0605 00:02:36.826127 984 log.go:172] (0xc000890790) (0xc0006505a0) Stream added, broadcasting: 5\nI0605 00:02:36.827001 984 log.go:172] (0xc000890790) Reply frame received for 5\nI0605 00:02:36.918174 984 log.go:172] (0xc000890790) Data frame received for 5\nI0605 00:02:36.918199 984 log.go:172] (0xc0006505a0) (5) Data frame handling\nI0605 00:02:36.918213 984 log.go:172] (0xc0006505a0) (5) Data frame sent\nI0605 00:02:36.918227 984 log.go:172] (0xc000890790) Data frame received for 5\nI0605 00:02:36.918242 984 log.go:172] (0xc0006505a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0605 00:02:36.918260 984 
log.go:172] (0xc0006505a0) (5) Data frame sent\nI0605 00:02:36.918503 984 log.go:172] (0xc000890790) Data frame received for 5\nI0605 00:02:36.918515 984 log.go:172] (0xc0006505a0) (5) Data frame handling\nI0605 00:02:36.918712 984 log.go:172] (0xc000890790) Data frame received for 3\nI0605 00:02:36.918732 984 log.go:172] (0xc000658d20) (3) Data frame handling\nI0605 00:02:36.920253 984 log.go:172] (0xc000890790) Data frame received for 1\nI0605 00:02:36.920295 984 log.go:172] (0xc0006a0fa0) (1) Data frame handling\nI0605 00:02:36.920312 984 log.go:172] (0xc0006a0fa0) (1) Data frame sent\nI0605 00:02:36.920336 984 log.go:172] (0xc000890790) (0xc0006a0fa0) Stream removed, broadcasting: 1\nI0605 00:02:36.920359 984 log.go:172] (0xc000890790) Go away received\nI0605 00:02:36.920768 984 log.go:172] (0xc000890790) (0xc0006a0fa0) Stream removed, broadcasting: 1\nI0605 00:02:36.920787 984 log.go:172] (0xc000890790) (0xc000658d20) Stream removed, broadcasting: 3\nI0605 00:02:36.920794 984 log.go:172] (0xc000890790) (0xc0006505a0) Stream removed, broadcasting: 5\n" Jun 5 00:02:36.926: INFO: stdout: "" Jun 5 00:02:36.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5421 execpod-affinityr7n5z -- /bin/sh -x -c nc -zv -t -w 2 10.104.212.139 80' Jun 5 00:02:37.174: INFO: stderr: "I0605 00:02:37.086351 1015 log.go:172] (0xc000becfd0) (0xc00053fe00) Create stream\nI0605 00:02:37.086443 1015 log.go:172] (0xc000becfd0) (0xc00053fe00) Stream added, broadcasting: 1\nI0605 00:02:37.089796 1015 log.go:172] (0xc000becfd0) Reply frame received for 1\nI0605 00:02:37.089952 1015 log.go:172] (0xc000becfd0) (0xc000372960) Create stream\nI0605 00:02:37.089977 1015 log.go:172] (0xc000becfd0) (0xc000372960) Stream added, broadcasting: 3\nI0605 00:02:37.091176 1015 log.go:172] (0xc000becfd0) Reply frame received for 3\nI0605 00:02:37.091235 1015 log.go:172] (0xc000becfd0) (0xc0002646e0) Create stream\nI0605 
00:02:37.091250 1015 log.go:172] (0xc000becfd0) (0xc0002646e0) Stream added, broadcasting: 5\nI0605 00:02:37.092437 1015 log.go:172] (0xc000becfd0) Reply frame received for 5\nI0605 00:02:37.165847 1015 log.go:172] (0xc000becfd0) Data frame received for 3\nI0605 00:02:37.165901 1015 log.go:172] (0xc000becfd0) Data frame received for 5\nI0605 00:02:37.165934 1015 log.go:172] (0xc0002646e0) (5) Data frame handling\nI0605 00:02:37.165954 1015 log.go:172] (0xc0002646e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.212.139 80\nConnection to 10.104.212.139 80 port [tcp/http] succeeded!\nI0605 00:02:37.165978 1015 log.go:172] (0xc000372960) (3) Data frame handling\nI0605 00:02:37.166116 1015 log.go:172] (0xc000becfd0) Data frame received for 5\nI0605 00:02:37.166239 1015 log.go:172] (0xc0002646e0) (5) Data frame handling\nI0605 00:02:37.167690 1015 log.go:172] (0xc000becfd0) Data frame received for 1\nI0605 00:02:37.167709 1015 log.go:172] (0xc00053fe00) (1) Data frame handling\nI0605 00:02:37.167719 1015 log.go:172] (0xc00053fe00) (1) Data frame sent\nI0605 00:02:37.167730 1015 log.go:172] (0xc000becfd0) (0xc00053fe00) Stream removed, broadcasting: 1\nI0605 00:02:37.168057 1015 log.go:172] (0xc000becfd0) (0xc00053fe00) Stream removed, broadcasting: 1\nI0605 00:02:37.168074 1015 log.go:172] (0xc000becfd0) (0xc000372960) Stream removed, broadcasting: 3\nI0605 00:02:37.168084 1015 log.go:172] (0xc000becfd0) (0xc0002646e0) Stream removed, broadcasting: 5\n" Jun 5 00:02:37.175: INFO: stdout: "" Jun 5 00:02:37.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5421 execpod-affinityr7n5z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.212.139:80/ ; done' Jun 5 00:02:37.623: INFO: stderr: "I0605 00:02:37.341736 1035 log.go:172] (0xc00098a630) (0xc00048de00) Create stream\nI0605 00:02:37.341790 1035 log.go:172] (0xc00098a630) (0xc00048de00) Stream 
added, broadcasting: 1\nI0605 00:02:37.343868 1035 log.go:172] (0xc00098a630) Reply frame received for 1\nI0605 00:02:37.343904 1035 log.go:172] (0xc00098a630) (0xc0002500a0) Create stream\nI0605 00:02:37.343919 1035 log.go:172] (0xc00098a630) (0xc0002500a0) Stream added, broadcasting: 3\nI0605 00:02:37.344572 1035 log.go:172] (0xc00098a630) Reply frame received for 3\nI0605 00:02:37.344600 1035 log.go:172] (0xc00098a630) (0xc0007a4960) Create stream\nI0605 00:02:37.344610 1035 log.go:172] (0xc00098a630) (0xc0007a4960) Stream added, broadcasting: 5\nI0605 00:02:37.345425 1035 log.go:172] (0xc00098a630) Reply frame received for 5\nI0605 00:02:37.535994 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.536033 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.536049 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.536081 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.536113 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.536154 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.543441 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.543478 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.543502 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.544168 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.544185 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.544199 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.544209 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.544221 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.544251 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.544287 1035 log.go:172] 
(0xc00098a630) Data frame received for 3\nI0605 00:02:37.544302 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.544311 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.552001 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.552037 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.552062 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.552725 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.552765 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.552783 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.552801 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.552812 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.552827 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.555921 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.555951 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.555980 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.556238 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.556268 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.556284 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.556311 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.556320 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.556332 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.563488 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.563527 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.563561 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.563956 
1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.563965 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.563971 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.563983 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.563997 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.564013 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.564024 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.564031 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.564050 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.568685 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.568713 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.568863 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.569272 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.569308 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.569324 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.569344 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.569374 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.569397 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.572618 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.572644 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.572664 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.572972 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.572999 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.573009 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.573020 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.573027 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.573034 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.576104 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.576132 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.576271 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.576559 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.576591 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.576619 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.576638 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.576687 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.576715 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.580182 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.580201 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.580221 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.580673 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.580698 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.580708 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.580722 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.580730 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.580738 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.585669 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.585699 1035 log.go:172] (0xc0002500a0) (3) Data frame 
handling\nI0605 00:02:37.585726 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.585988 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.586040 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.586057 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.586077 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.586097 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.586123 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.586136 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.586145 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.586163 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.589857 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.589880 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.589895 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.590151 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.590182 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.590196 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.590210 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.590219 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.590229 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.594253 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.594274 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.594292 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.594659 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.594684 1035 log.go:172] 
(0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.594694 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.594708 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.594721 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.594729 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.597533 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.597560 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.597665 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.597746 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.597766 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.597776 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.597789 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.597797 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.597805 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.601767 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.601805 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.601839 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.602055 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.602087 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.602105 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.602122 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.602137 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.602160 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.605567 1035 
log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.605599 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.605629 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.605997 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.606026 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.606056 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.606107 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.606135 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.606154 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.606178 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.606204 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.606245 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\nI0605 00:02:37.609688 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.609726 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.609750 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.610355 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.610381 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.610393 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 00:02:37.610408 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.610416 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.610426 1035 log.go:172] (0xc0007a4960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.614072 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.614094 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.614106 1035 log.go:172] (0xc0002500a0) (3) Data frame sent\nI0605 
00:02:37.614623 1035 log.go:172] (0xc00098a630) Data frame received for 5\nI0605 00:02:37.614663 1035 log.go:172] (0xc0007a4960) (5) Data frame handling\nI0605 00:02:37.614842 1035 log.go:172] (0xc00098a630) Data frame received for 3\nI0605 00:02:37.614868 1035 log.go:172] (0xc0002500a0) (3) Data frame handling\nI0605 00:02:37.616319 1035 log.go:172] (0xc00098a630) Data frame received for 1\nI0605 00:02:37.616345 1035 log.go:172] (0xc00048de00) (1) Data frame handling\nI0605 00:02:37.616376 1035 log.go:172] (0xc00048de00) (1) Data frame sent\nI0605 00:02:37.616414 1035 log.go:172] (0xc00098a630) (0xc00048de00) Stream removed, broadcasting: 1\nI0605 00:02:37.616586 1035 log.go:172] (0xc00098a630) Go away received\nI0605 00:02:37.616888 1035 log.go:172] (0xc00098a630) (0xc00048de00) Stream removed, broadcasting: 1\nI0605 00:02:37.616918 1035 log.go:172] (0xc00098a630) (0xc0002500a0) Stream removed, broadcasting: 3\nI0605 00:02:37.616933 1035 log.go:172] (0xc00098a630) (0xc0007a4960) Stream removed, broadcasting: 5\n" Jun 5 00:02:37.623: INFO: stdout: "\naffinity-clusterip-transition-xgxrg\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-xgxrg\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-xgxrg\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-xgxrg\naffinity-clusterip-transition-xgxrg\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-8k6l2\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd" Jun 5 00:02:37.623: INFO: Received response from host: Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-xgxrg Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 
00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-xgxrg Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-xgxrg Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-xgxrg Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-xgxrg Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-8k6l2 Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.623: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5421 execpod-affinityr7n5z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.212.139:80/ ; done' Jun 5 00:02:37.962: INFO: stderr: "I0605 00:02:37.775828 1049 log.go:172] (0xc000a6f600) (0xc000b70320) Create stream\nI0605 00:02:37.775887 1049 log.go:172] (0xc000a6f600) (0xc000b70320) Stream added, broadcasting: 1\nI0605 00:02:37.780165 1049 log.go:172] (0xc000a6f600) Reply frame received for 1\nI0605 00:02:37.780202 1049 log.go:172] (0xc000a6f600) (0xc000800fa0) Create stream\nI0605 00:02:37.780218 1049 log.go:172] (0xc000a6f600) (0xc000800fa0) Stream added, broadcasting: 3\nI0605 00:02:37.780985 1049 log.go:172] 
(0xc000a6f600) Reply frame received for 3\nI0605 00:02:37.781023 1049 log.go:172] (0xc000a6f600) (0xc0005c6dc0) Create stream\nI0605 00:02:37.781039 1049 log.go:172] (0xc000a6f600) (0xc0005c6dc0) Stream added, broadcasting: 5\nI0605 00:02:37.782348 1049 log.go:172] (0xc000a6f600) Reply frame received for 5\nI0605 00:02:37.873362 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.873395 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.873407 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.873428 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.873437 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.873446 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.876629 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.876652 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.876680 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.876882 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.876894 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.876901 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.877014 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.877032 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.877054 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.882713 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.882733 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.882748 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.883339 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.883368 1049 log.go:172] 
(0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.883381 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.883401 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.883421 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.883442 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.887116 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.887143 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.887153 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ I0605 00:02:37.887188 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.887229 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.887261 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.887286 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.887304 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.887316 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.887333 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.887348 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.887360 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.892036 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.892056 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.892066 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.892760 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.892794 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.892867 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.892899 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.892941 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.893004 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.896665 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.896689 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.896701 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.897429 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.897458 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.897471 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.897490 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.897506 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.897519 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.900885 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.900910 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.900929 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.901778 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.901811 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.901824 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.901846 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.901855 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.901866 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.906073 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.906115 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.906160 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 
00:02:37.906645 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.906667 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.906692 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.906705 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.906721 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.906731 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.911030 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.911063 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.911085 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.911294 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.911312 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.911329 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.911360 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.911376 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.911389 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\nI0605 00:02:37.915199 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.915223 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.915250 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.915735 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.915757 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.915780 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.915970 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.915993 1049 log.go:172] (0xc000800fa0) (3) Data frame 
handling\nI0605 00:02:37.916018 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.921931 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.921951 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.921970 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.922912 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.922944 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.922961 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.922980 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.922991 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.923016 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.926917 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.926948 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.926976 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.927414 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.927445 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.927459 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.927480 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.927490 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.927503 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.931310 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.931339 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.931370 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.931578 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.931599 1049 log.go:172] 
(0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.931619 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.931884 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.931906 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.931932 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.936515 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.936554 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.936596 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.936836 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.936856 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.936868 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.936938 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.936957 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.936987 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.941457 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.941472 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.941479 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.942230 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.942247 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.942267 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.942622 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.942647 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.942661 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.946229 1049 
log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.946251 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.946270 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.946892 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.946905 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.946912 1049 log.go:172] (0xc0005c6dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.212.139:80/\nI0605 00:02:37.946925 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.946947 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.946964 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.952046 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.952071 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.952095 1049 log.go:172] (0xc000800fa0) (3) Data frame sent\nI0605 00:02:37.952832 1049 log.go:172] (0xc000a6f600) Data frame received for 5\nI0605 00:02:37.952849 1049 log.go:172] (0xc0005c6dc0) (5) Data frame handling\nI0605 00:02:37.953036 1049 log.go:172] (0xc000a6f600) Data frame received for 3\nI0605 00:02:37.953053 1049 log.go:172] (0xc000800fa0) (3) Data frame handling\nI0605 00:02:37.954479 1049 log.go:172] (0xc000a6f600) Data frame received for 1\nI0605 00:02:37.954499 1049 log.go:172] (0xc000b70320) (1) Data frame handling\nI0605 00:02:37.954513 1049 log.go:172] (0xc000b70320) (1) Data frame sent\nI0605 00:02:37.954532 1049 log.go:172] (0xc000a6f600) (0xc000b70320) Stream removed, broadcasting: 1\nI0605 00:02:37.954542 1049 log.go:172] (0xc000a6f600) Go away received\nI0605 00:02:37.955036 1049 log.go:172] (0xc000a6f600) (0xc000b70320) Stream removed, broadcasting: 1\nI0605 00:02:37.955057 1049 log.go:172] (0xc000a6f600) (0xc000800fa0) Stream removed, broadcasting: 3\nI0605 00:02:37.955068 1049 log.go:172] (0xc000a6f600) (0xc0005c6dc0) Stream removed, 
broadcasting: 5\n" Jun 5 00:02:37.962: INFO: stdout: "\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd\naffinity-clusterip-transition-qvxjd" Jun 5 00:02:37.962: INFO: Received response from host: Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: 
INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Received response from host: affinity-clusterip-transition-qvxjd Jun 5 00:02:37.962: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5421, will wait for the garbage collector to delete the pods Jun 5 00:02:38.115: INFO: Deleting ReplicationController affinity-clusterip-transition took: 27.460293ms Jun 5 00:02:38.716: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.206545ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:02:43.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5421" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:30.675 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":95,"skipped":1492,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:02:43.779: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-ntpk STEP: Creating a pod to test atomic-volume-subpath Jun 5 00:02:43.918: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ntpk" in namespace "subpath-8908" to be "Succeeded or Failed" Jun 5 00:02:43.921: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349813ms Jun 5 00:02:45.932: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014752833s Jun 5 00:02:47.935: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 4.017877761s Jun 5 00:02:49.939: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 6.021558748s Jun 5 00:02:51.944: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 8.025926964s Jun 5 00:02:53.947: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 10.029761988s Jun 5 00:02:55.952: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 12.034167889s Jun 5 00:02:57.956: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 14.038477578s Jun 5 00:02:59.961: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.042967958s Jun 5 00:03:01.965: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 18.047816468s Jun 5 00:03:03.969: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 20.051430527s Jun 5 00:03:05.973: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Running", Reason="", readiness=true. Elapsed: 22.055861824s Jun 5 00:03:07.978: INFO: Pod "pod-subpath-test-downwardapi-ntpk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060466784s STEP: Saw pod success Jun 5 00:03:07.978: INFO: Pod "pod-subpath-test-downwardapi-ntpk" satisfied condition "Succeeded or Failed" Jun 5 00:03:07.982: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-ntpk container test-container-subpath-downwardapi-ntpk: STEP: delete the pod Jun 5 00:03:08.029: INFO: Waiting for pod pod-subpath-test-downwardapi-ntpk to disappear Jun 5 00:03:08.032: INFO: Pod pod-subpath-test-downwardapi-ntpk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ntpk Jun 5 00:03:08.032: INFO: Deleting pod "pod-subpath-test-downwardapi-ntpk" in namespace "subpath-8908" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:03:08.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8908" for this suite. 
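For reference, the atomic-writer subPath setup this test exercises can be sketched as a manifest like the following. This is an illustrative reconstruction, not the exact spec the e2e framework generates: the pod, container, and namespace names are taken from the log above, but the image, command, and file layout are assumptions.

```yaml
# Sketch of a pod mounting a downwardAPI (atomic-writer) volume via subPath.
# Names from the log; image/command/paths are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-ntpk
  namespace: subpath-8908
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi-ntpk
    image: busybox                      # assumed; the framework uses its own test image
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: downward                 # mount only one subdirectory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: downward/podname          # projected file the subPath mount exposes
        fieldRef:
          fieldPath: metadata.name
```

The point of the test is that downwardAPI volumes are written atomically (via a `..data` symlink swap), and a `subPath` mount must still see consistent contents across those atomic updates.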
• [SLOW TEST:24.265 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":96,"skipped":1510,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:03:08.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5374
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5374
STEP: creating replication controller externalsvc in namespace services-5374
I0605 00:03:08.195201 7 runners.go:190] Created replication controller with name:
externalsvc, namespace: services-5374, replica count: 2 I0605 00:03:11.245619 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:03:14.245912 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:03:17.246239 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 5 00:03:17.281: INFO: Creating new exec pod Jun 5 00:03:21.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5374 execpodvhg4t -- /bin/sh -x -c nslookup clusterip-service' Jun 5 00:03:21.675: INFO: stderr: "I0605 00:03:21.477458 1068 log.go:172] (0xc000a67340) (0xc000174aa0) Create stream\nI0605 00:03:21.477523 1068 log.go:172] (0xc000a67340) (0xc000174aa0) Stream added, broadcasting: 1\nI0605 00:03:21.479755 1068 log.go:172] (0xc000a67340) Reply frame received for 1\nI0605 00:03:21.479786 1068 log.go:172] (0xc000a67340) (0xc000268280) Create stream\nI0605 00:03:21.479794 1068 log.go:172] (0xc000a67340) (0xc000268280) Stream added, broadcasting: 3\nI0605 00:03:21.480839 1068 log.go:172] (0xc000a67340) Reply frame received for 3\nI0605 00:03:21.480881 1068 log.go:172] (0xc000a67340) (0xc000492000) Create stream\nI0605 00:03:21.480897 1068 log.go:172] (0xc000a67340) (0xc000492000) Stream added, broadcasting: 5\nI0605 00:03:21.482304 1068 log.go:172] (0xc000a67340) Reply frame received for 5\nI0605 00:03:21.566569 1068 log.go:172] (0xc000a67340) Data frame received for 5\nI0605 00:03:21.566595 1068 log.go:172] (0xc000492000) (5) Data frame handling\nI0605 00:03:21.566611 1068 log.go:172] (0xc000492000) (5) Data frame sent\n+ nslookup clusterip-service\nI0605 
00:03:21.666597 1068 log.go:172] (0xc000a67340) Data frame received for 3\nI0605 00:03:21.666638 1068 log.go:172] (0xc000268280) (3) Data frame handling\nI0605 00:03:21.666664 1068 log.go:172] (0xc000268280) (3) Data frame sent\nI0605 00:03:21.667238 1068 log.go:172] (0xc000a67340) Data frame received for 3\nI0605 00:03:21.667264 1068 log.go:172] (0xc000268280) (3) Data frame handling\nI0605 00:03:21.667287 1068 log.go:172] (0xc000268280) (3) Data frame sent\nI0605 00:03:21.667763 1068 log.go:172] (0xc000a67340) Data frame received for 3\nI0605 00:03:21.667794 1068 log.go:172] (0xc000268280) (3) Data frame handling\nI0605 00:03:21.668035 1068 log.go:172] (0xc000a67340) Data frame received for 5\nI0605 00:03:21.668066 1068 log.go:172] (0xc000492000) (5) Data frame handling\nI0605 00:03:21.670111 1068 log.go:172] (0xc000a67340) Data frame received for 1\nI0605 00:03:21.670130 1068 log.go:172] (0xc000174aa0) (1) Data frame handling\nI0605 00:03:21.670142 1068 log.go:172] (0xc000174aa0) (1) Data frame sent\nI0605 00:03:21.670155 1068 log.go:172] (0xc000a67340) (0xc000174aa0) Stream removed, broadcasting: 1\nI0605 00:03:21.670193 1068 log.go:172] (0xc000a67340) Go away received\nI0605 00:03:21.670426 1068 log.go:172] (0xc000a67340) (0xc000174aa0) Stream removed, broadcasting: 1\nI0605 00:03:21.670450 1068 log.go:172] (0xc000a67340) (0xc000268280) Stream removed, broadcasting: 3\nI0605 00:03:21.670461 1068 log.go:172] (0xc000a67340) (0xc000492000) Stream removed, broadcasting: 5\n" Jun 5 00:03:21.675: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5374.svc.cluster.local\tcanonical name = externalsvc.services-5374.svc.cluster.local.\nName:\texternalsvc.services-5374.svc.cluster.local\nAddress: 10.104.138.50\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5374, will wait for the garbage collector to delete the pods Jun 5 00:03:21.775: INFO: Deleting ReplicationController externalsvc took: 5.824289ms 
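The type change being verified here can be sketched as the resulting Service object. The names and the target FQDN are taken directly from the log (the `nslookup` output above shows `clusterip-service` resolving as a CNAME to `externalsvc.services-5374.svc.cluster.local`); treat this as an illustrative reconstruction rather than the framework's exact spec.

```yaml
# Sketch of the Service after the ClusterIP -> ExternalName change.
# An ExternalName Service is served by cluster DNS as a CNAME record,
# which is exactly what the nslookup in the log confirms.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-5374
spec:
  type: ExternalName
  externalName: externalsvc.services-5374.svc.cluster.local
```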
Jun 5 00:03:22.175: INFO: Terminating ReplicationController externalsvc pods took: 400.246973ms Jun 5 00:03:35.444: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:03:35.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5374" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.484 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":97,"skipped":1522,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:03:35.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 5 
00:03:35.625: INFO: Waiting up to 5m0s for pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8" in namespace "emptydir-6909" to be "Succeeded or Failed" Jun 5 00:03:35.644: INFO: Pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.238609ms Jun 5 00:03:37.648: INFO: Pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023895465s Jun 5 00:03:39.654: INFO: Pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8": Phase="Running", Reason="", readiness=true. Elapsed: 4.029549754s Jun 5 00:03:41.658: INFO: Pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033505216s STEP: Saw pod success Jun 5 00:03:41.658: INFO: Pod "pod-927c49d7-2919-486b-93ad-3ec765a0fac8" satisfied condition "Succeeded or Failed" Jun 5 00:03:41.662: INFO: Trying to get logs from node latest-worker pod pod-927c49d7-2919-486b-93ad-3ec765a0fac8 container test-container: STEP: delete the pod Jun 5 00:03:41.720: INFO: Waiting for pod pod-927c49d7-2919-486b-93ad-3ec765a0fac8 to disappear Jun 5 00:03:41.734: INFO: Pod pod-927c49d7-2919-486b-93ad-3ec765a0fac8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:03:41.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6909" for this suite. 
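The "(non-root,0666,default)" case above can be sketched roughly as follows. Pod name and namespace come from the log; the UID, image, and command are assumptions standing in for the framework's mount-test container, which creates a file with mode 0666 on a default-medium (node disk) emptyDir while running as a non-root user.

```yaml
# Sketch of an emptyDir permission test: non-root user, 0666 file mode,
# default medium. Image, UID, and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-927c49d7-2919-486b-93ad-3ec765a0fac8
  namespace: emptydir-6909
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                     # "non-root": any non-zero UID (value assumed)
  containers:
  - name: test-container
    image: busybox                      # assumed
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                        # "default" medium = node-local disk (no medium: Memory)
```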
• [SLOW TEST:6.212 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":98,"skipped":1535,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:03:41.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-89854605-388a-4880-adc5-20df04a6fd69 in namespace container-probe-9634
Jun 5 00:03:45.888: INFO: Started pod busybox-89854605-388a-4880-adc5-20df04a6fd69 in namespace container-probe-9634
STEP: checking the pod's current state and verifying that restartCount is present
Jun 5 00:03:45.892: INFO: Initial restart count of pod busybox-89854605-388a-4880-adc5-20df04a6fd69 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:07:46.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9634" for this suite. • [SLOW TEST:244.840 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:07:46.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jun 5 00:07:46.675: INFO: Waiting up to 5m0s for pod "var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555" in namespace "var-expansion-274" to be "Succeeded or Failed" Jun 5 00:07:46.678: INFO: Pod 
"var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555": Phase="Pending", Reason="", readiness=false. Elapsed: 3.497709ms Jun 5 00:07:48.704: INFO: Pod "var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028900551s Jun 5 00:07:50.707: INFO: Pod "var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032657225s STEP: Saw pod success Jun 5 00:07:50.707: INFO: Pod "var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555" satisfied condition "Succeeded or Failed" Jun 5 00:07:50.710: INFO: Trying to get logs from node latest-worker2 pod var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555 container dapi-container: STEP: delete the pod Jun 5 00:07:50.740: INFO: Waiting for pod var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555 to disappear Jun 5 00:07:50.744: INFO: Pod var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:07:50.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-274" for this suite. 
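The args-substitution behavior tested above can be sketched as a pod like this one. Pod name, namespace, and container name are from the log; the environment variable and its value are hypothetical. The key mechanism is that `$(VAR)` references in `command`/`args` are expanded by the kubelet from the container's declared `env`, before any shell is involved.

```yaml
# Sketch of variable expansion in container args.
# TEST_VAR and its value are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-7f1fd1ce-ecef-4b45-b8ad-fd085c0e5555
  namespace: var-expansion-274
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                      # assumed
    env:
    - name: TEST_VAR
      value: test-value
    args: ["sh", "-c", "echo $(TEST_VAR)"]   # $(TEST_VAR) expanded by the kubelet, not the shell
```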
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:07:50.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
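The "simple DaemonSet" this test creates can be sketched as follows. The DaemonSet name and namespace are from the log; the label selector, image, and command are assumptions. Note the absence of a toleration for the `node-role.kubernetes.io/master` taint, which is why the log repeatedly skips the `latest-control-plane` node.

```yaml
# Sketch of a minimal DaemonSet; selector labels and container are assumed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-2882
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox                  # assumed; the test uses its own serve-hostname image
        command: ["sleep", "3600"]
      # No toleration for node-role.kubernetes.io/master:NoSchedule, so the
      # control-plane node is skipped, matching the "can't tolerate" lines.
```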
Jun 5 00:07:50.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:50.935: INFO: Number of nodes with available pods: 0 Jun 5 00:07:50.935: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:52.052: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:52.104: INFO: Number of nodes with available pods: 0 Jun 5 00:07:52.104: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:53.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:53.103: INFO: Number of nodes with available pods: 0 Jun 5 00:07:53.103: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:53.941: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:53.944: INFO: Number of nodes with available pods: 0 Jun 5 00:07:53.944: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:54.940: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:54.948: INFO: Number of nodes with available pods: 1 Jun 5 00:07:54.948: INFO: Node latest-worker2 is running more than one daemon pod Jun 5 00:07:55.955: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:55.958: INFO: Number of nodes with available pods: 2 Jun 5 00:07:55.958: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 5 00:07:55.994: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:55.998: INFO: Number of nodes with available pods: 1 Jun 5 00:07:55.998: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:57.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:57.049: INFO: Number of nodes with available pods: 1 Jun 5 00:07:57.049: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:58.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:58.008: INFO: Number of nodes with available pods: 1 Jun 5 00:07:58.008: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:07:59.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:07:59.008: INFO: Number of nodes with available pods: 1 Jun 5 00:07:59.008: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:00.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:00.006: INFO: Number of nodes with available pods: 1 Jun 5 00:08:00.006: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:01.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node Jun 5 00:08:01.007: INFO: Number of nodes with available pods: 1 Jun 5 00:08:01.007: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:02.009: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:02.014: INFO: Number of nodes with available pods: 1 Jun 5 00:08:02.014: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:03.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:03.008: INFO: Number of nodes with available pods: 1 Jun 5 00:08:03.008: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:04.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:04.010: INFO: Number of nodes with available pods: 1 Jun 5 00:08:04.010: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:05.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:05.039: INFO: Number of nodes with available pods: 1 Jun 5 00:08:05.039: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:06.003: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:06.007: INFO: Number of nodes with available pods: 1 Jun 5 00:08:06.007: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:07.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:07.008: INFO: Number of nodes with available pods: 1 Jun 5 00:08:07.008: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:08.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:08.008: INFO: Number of nodes with available pods: 1 Jun 5 00:08:08.008: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:08:09.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:08:09.008: INFO: Number of nodes with available pods: 2 Jun 5 00:08:09.008: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2882, will wait for the garbage collector to delete the pods Jun 5 00:08:09.071: INFO: Deleting DaemonSet.extensions daemon-set took: 6.847691ms Jun 5 00:08:09.371: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.310614ms Jun 5 00:08:14.901: INFO: Number of nodes with available pods: 0 Jun 5 00:08:14.901: INFO: Number of running nodes: 0, number of available pods: 0 Jun 5 00:08:14.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2882/daemonsets","resourceVersion":"10332662"},"items":null} Jun 5 00:08:14.908: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2882/pods","resourceVersion":"10332662"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:08:14.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2882" for this suite.
• [SLOW TEST:24.171 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":101,"skipped":1681,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:08:14.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-6512
STEP: creating service affinity-clusterip in namespace services-6512
STEP: creating replication controller affinity-clusterip in namespace services-6512
I0605 00:08:15.095015 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace:
services-6512, replica count: 3 I0605 00:08:18.145475 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:08:21.145796 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:08:21.152: INFO: Creating new exec pod Jun 5 00:08:26.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6512 execpod-affinity7lj94 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jun 5 00:08:26.413: INFO: stderr: "I0605 00:08:26.304639 1088 log.go:172] (0xc00099d290) (0xc00071cb40) Create stream\nI0605 00:08:26.304690 1088 log.go:172] (0xc00099d290) (0xc00071cb40) Stream added, broadcasting: 1\nI0605 00:08:26.307517 1088 log.go:172] (0xc00099d290) Reply frame received for 1\nI0605 00:08:26.307562 1088 log.go:172] (0xc00099d290) (0xc0007326e0) Create stream\nI0605 00:08:26.307580 1088 log.go:172] (0xc00099d290) (0xc0007326e0) Stream added, broadcasting: 3\nI0605 00:08:26.310091 1088 log.go:172] (0xc00099d290) Reply frame received for 3\nI0605 00:08:26.310515 1088 log.go:172] (0xc00099d290) (0xc000733040) Create stream\nI0605 00:08:26.310539 1088 log.go:172] (0xc00099d290) (0xc000733040) Stream added, broadcasting: 5\nI0605 00:08:26.311497 1088 log.go:172] (0xc00099d290) Reply frame received for 5\nI0605 00:08:26.398308 1088 log.go:172] (0xc00099d290) Data frame received for 5\nI0605 00:08:26.398346 1088 log.go:172] (0xc000733040) (5) Data frame handling\nI0605 00:08:26.398378 1088 log.go:172] (0xc000733040) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0605 00:08:26.405602 1088 log.go:172] (0xc00099d290) Data frame received for 5\nI0605 00:08:26.405637 1088 log.go:172] (0xc000733040) (5) Data frame handling\nI0605 00:08:26.405659 1088 log.go:172] (0xc000733040) (5) Data 
frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0605 00:08:26.405985 1088 log.go:172] (0xc00099d290) Data frame received for 5\nI0605 00:08:26.406012 1088 log.go:172] (0xc00099d290) Data frame received for 3\nI0605 00:08:26.406050 1088 log.go:172] (0xc0007326e0) (3) Data frame handling\nI0605 00:08:26.406077 1088 log.go:172] (0xc000733040) (5) Data frame handling\nI0605 00:08:26.408047 1088 log.go:172] (0xc00099d290) Data frame received for 1\nI0605 00:08:26.408069 1088 log.go:172] (0xc00071cb40) (1) Data frame handling\nI0605 00:08:26.408081 1088 log.go:172] (0xc00071cb40) (1) Data frame sent\nI0605 00:08:26.408095 1088 log.go:172] (0xc00099d290) (0xc00071cb40) Stream removed, broadcasting: 1\nI0605 00:08:26.408112 1088 log.go:172] (0xc00099d290) Go away received\nI0605 00:08:26.408591 1088 log.go:172] (0xc00099d290) (0xc00071cb40) Stream removed, broadcasting: 1\nI0605 00:08:26.408632 1088 log.go:172] (0xc00099d290) (0xc0007326e0) Stream removed, broadcasting: 3\nI0605 00:08:26.408645 1088 log.go:172] (0xc00099d290) (0xc000733040) Stream removed, broadcasting: 5\n" Jun 5 00:08:26.413: INFO: stdout: "" Jun 5 00:08:26.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6512 execpod-affinity7lj94 -- /bin/sh -x -c nc -zv -t -w 2 10.110.103.93 80' Jun 5 00:08:26.648: INFO: stderr: "I0605 00:08:26.567356 1109 log.go:172] (0xc00003a420) (0xc000610c80) Create stream\nI0605 00:08:26.567440 1109 log.go:172] (0xc00003a420) (0xc000610c80) Stream added, broadcasting: 1\nI0605 00:08:26.570035 1109 log.go:172] (0xc00003a420) Reply frame received for 1\nI0605 00:08:26.570067 1109 log.go:172] (0xc00003a420) (0xc0005c0500) Create stream\nI0605 00:08:26.570078 1109 log.go:172] (0xc00003a420) (0xc0005c0500) Stream added, broadcasting: 3\nI0605 00:08:26.571032 1109 log.go:172] (0xc00003a420) Reply frame received for 3\nI0605 00:08:26.571054 1109 log.go:172] (0xc00003a420) 
(0xc0005c0dc0) Create stream\nI0605 00:08:26.571063 1109 log.go:172] (0xc00003a420) (0xc0005c0dc0) Stream added, broadcasting: 5\nI0605 00:08:26.571968 1109 log.go:172] (0xc00003a420) Reply frame received for 5\nI0605 00:08:26.638865 1109 log.go:172] (0xc00003a420) Data frame received for 3\nI0605 00:08:26.638904 1109 log.go:172] (0xc0005c0500) (3) Data frame handling\nI0605 00:08:26.638953 1109 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:08:26.638988 1109 log.go:172] (0xc0005c0dc0) (5) Data frame handling\nI0605 00:08:26.639006 1109 log.go:172] (0xc0005c0dc0) (5) Data frame sent\nI0605 00:08:26.639035 1109 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:08:26.639055 1109 log.go:172] (0xc0005c0dc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.103.93 80\nConnection to 10.110.103.93 80 port [tcp/http] succeeded!\nI0605 00:08:26.640645 1109 log.go:172] (0xc00003a420) Data frame received for 1\nI0605 00:08:26.640682 1109 log.go:172] (0xc000610c80) (1) Data frame handling\nI0605 00:08:26.640697 1109 log.go:172] (0xc000610c80) (1) Data frame sent\nI0605 00:08:26.640716 1109 log.go:172] (0xc00003a420) (0xc000610c80) Stream removed, broadcasting: 1\nI0605 00:08:26.640851 1109 log.go:172] (0xc00003a420) Go away received\nI0605 00:08:26.641039 1109 log.go:172] (0xc00003a420) (0xc000610c80) Stream removed, broadcasting: 1\nI0605 00:08:26.641061 1109 log.go:172] (0xc00003a420) (0xc0005c0500) Stream removed, broadcasting: 3\nI0605 00:08:26.641072 1109 log.go:172] (0xc00003a420) (0xc0005c0dc0) Stream removed, broadcasting: 5\n" Jun 5 00:08:26.648: INFO: stdout: "" Jun 5 00:08:26.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6512 execpod-affinity7lj94 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.103.93:80/ ; done' Jun 5 00:08:26.999: INFO: stderr: "I0605 00:08:26.824124 1130 log.go:172] (0xc000a956b0) 
(0xc000b5a5a0) Create stream\nI0605 00:08:26.824175 1130 log.go:172] (0xc000a956b0) (0xc000b5a5a0) Stream added, broadcasting: 1\nI0605 00:08:26.828773 1130 log.go:172] (0xc000a956b0) Reply frame received for 1\nI0605 00:08:26.828840 1130 log.go:172] (0xc000a956b0) (0xc0006ee6e0) Create stream\nI0605 00:08:26.828859 1130 log.go:172] (0xc000a956b0) (0xc0006ee6e0) Stream added, broadcasting: 3\nI0605 00:08:26.830230 1130 log.go:172] (0xc000a956b0) Reply frame received for 3\nI0605 00:08:26.830282 1130 log.go:172] (0xc000a956b0) (0xc000472e60) Create stream\nI0605 00:08:26.830299 1130 log.go:172] (0xc000a956b0) (0xc000472e60) Stream added, broadcasting: 5\nI0605 00:08:26.831201 1130 log.go:172] (0xc000a956b0) Reply frame received for 5\nI0605 00:08:26.902971 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.903033 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.903059 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.903112 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.903138 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.903161 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.910102 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.910129 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.910148 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.910788 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.910820 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.910833 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.910853 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.910866 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 
00:08:26.910873 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.918803 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.918823 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.918839 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.919881 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.919914 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.919926 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.919940 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.919951 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.919966 1130 log.go:172] (0xc000472e60) (5) Data frame sent\nI0605 00:08:26.919977 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.919987 1130 log.go:172] (0xc000472e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.920008 1130 log.go:172] (0xc000472e60) (5) Data frame sent\nI0605 00:08:26.924597 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.924627 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.924646 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.926998 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.927031 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.927042 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.927066 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.927095 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.927119 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.930079 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.930113 1130 log.go:172] (0xc0006ee6e0) (3) Data frame 
handling\nI0605 00:08:26.930151 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.930753 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.930791 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.930815 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.930852 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.930873 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.930900 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.934840 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.934874 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.934894 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.935286 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.935298 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.935305 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.935475 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.935503 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.935526 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.941348 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.941374 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.941396 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.941883 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.941899 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.941907 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.941923 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.941931 1130 log.go:172] (0xc000472e60) 
(5) Data frame handling\nI0605 00:08:26.941938 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.945350 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.945365 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.945373 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.945649 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.945670 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.945679 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.945690 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.945696 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.945702 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.949571 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.949601 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.949622 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.950187 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.950199 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.950208 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.950219 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.950227 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.950238 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.953935 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.953963 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.953985 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.954286 1130 log.go:172] (0xc000a956b0) 
Data frame received for 3\nI0605 00:08:26.954314 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.954331 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.954349 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.954360 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.954370 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.959287 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.959314 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.959339 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.959815 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.959835 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.959844 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.959857 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.959865 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.959873 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.963478 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.963508 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.963537 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.963933 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.963953 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.963977 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.963994 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.964006 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.964029 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.110.103.93:80/\nI0605 00:08:26.969443 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.969466 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.969484 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.969995 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.970013 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.970027 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.970043 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.970053 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.970061 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.974427 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.974453 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.974494 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.974781 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.974796 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.974806 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.974824 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.974832 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.974839 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.978687 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.978698 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.978704 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.979133 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.979143 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.979148 1130 
log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.979297 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.979325 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.979353 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.983780 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.983797 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.983805 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.984837 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.984860 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.984876 1130 log.go:172] (0xc000472e60) (5) Data frame sent\n+ I0605 00:08:26.985476 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.985494 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.985510 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.985543 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.985556 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.985568 1130 log.go:172] (0xc000472e60) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://10.110.103.93:80/\nI0605 00:08:26.991581 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.991594 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.991605 1130 log.go:172] (0xc0006ee6e0) (3) Data frame sent\nI0605 00:08:26.992445 1130 log.go:172] (0xc000a956b0) Data frame received for 3\nI0605 00:08:26.992467 1130 log.go:172] (0xc0006ee6e0) (3) Data frame handling\nI0605 00:08:26.992628 1130 log.go:172] (0xc000a956b0) Data frame received for 5\nI0605 00:08:26.992651 1130 log.go:172] (0xc000472e60) (5) Data frame handling\nI0605 00:08:26.994213 1130 log.go:172] (0xc000a956b0) Data frame received for 1\nI0605 
00:08:26.994242 1130 log.go:172] (0xc000b5a5a0) (1) Data frame handling\nI0605 00:08:26.994256 1130 log.go:172] (0xc000b5a5a0) (1) Data frame sent\nI0605 00:08:26.994274 1130 log.go:172] (0xc000a956b0) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0605 00:08:26.994290 1130 log.go:172] (0xc000a956b0) Go away received\nI0605 00:08:26.994544 1130 log.go:172] (0xc000a956b0) (0xc000b5a5a0) Stream removed, broadcasting: 1\nI0605 00:08:26.994561 1130 log.go:172] (0xc000a956b0) (0xc0006ee6e0) Stream removed, broadcasting: 3\nI0605 00:08:26.994573 1130 log.go:172] (0xc000a956b0) (0xc000472e60) Stream removed, broadcasting: 5\n" Jun 5 00:08:27.000: INFO: stdout: "\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr\naffinity-clusterip-p52qr" Jun 5 00:08:27.000: INFO: Received response from host: Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: 
affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Received response from host: affinity-clusterip-p52qr Jun 5 00:08:27.000: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-6512, will wait for the garbage collector to delete the pods Jun 5 00:08:27.221: INFO: Deleting ReplicationController affinity-clusterip took: 102.031601ms Jun 5 00:08:27.622: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.248957ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:08:35.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6512" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.442 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":102,"skipped":1702,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:08:35.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2bfa8556-f29a-4863-ad4a-27ba0602ba7b STEP: Creating a pod to test consume configMaps Jun 5 00:08:35.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545" in namespace "projected-8971" to be "Succeeded or Failed" Jun 5 00:08:35.488: INFO: Pod "pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545": Phase="Pending", Reason="", readiness=false. 
Elapsed: 64.935691ms Jun 5 00:08:37.757: INFO: Pod "pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334373511s Jun 5 00:08:39.762: INFO: Pod "pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.338702204s STEP: Saw pod success Jun 5 00:08:39.762: INFO: Pod "pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545" satisfied condition "Succeeded or Failed" Jun 5 00:08:39.764: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545 container projected-configmap-volume-test: STEP: delete the pod Jun 5 00:08:39.815: INFO: Waiting for pod pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545 to disappear Jun 5 00:08:39.819: INFO: Pod pod-projected-configmaps-c3361328-f0a3-45bd-bf45-48fe747c6545 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:08:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8971" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1714,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:08:39.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:08:39.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38" in namespace "downward-api-3204" to be "Succeeded or Failed" Jun 5 00:08:39.931: INFO: Pod "downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38": Phase="Pending", Reason="", readiness=false. Elapsed: 10.367525ms Jun 5 00:08:41.954: INFO: Pod "downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033431721s Jun 5 00:08:43.958: INFO: Pod "downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037427304s STEP: Saw pod success Jun 5 00:08:43.959: INFO: Pod "downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38" satisfied condition "Succeeded or Failed" Jun 5 00:08:43.961: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38 container client-container: STEP: delete the pod Jun 5 00:08:44.022: INFO: Waiting for pod downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38 to disappear Jun 5 00:08:44.027: INFO: Pod downwardapi-volume-edb9a409-73f6-4cbc-adad-92e294423f38 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:08:44.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3204" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:08:44.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 5 00:08:44.119: INFO: Waiting up to 
5m0s for pod "pod-ec328a39-3047-4663-b4aa-63fff69a4d8d" in namespace "emptydir-4317" to be "Succeeded or Failed" Jun 5 00:08:44.147: INFO: Pod "pod-ec328a39-3047-4663-b4aa-63fff69a4d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.140642ms Jun 5 00:08:46.150: INFO: Pod "pod-ec328a39-3047-4663-b4aa-63fff69a4d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031756642s Jun 5 00:08:48.155: INFO: Pod "pod-ec328a39-3047-4663-b4aa-63fff69a4d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035977068s STEP: Saw pod success Jun 5 00:08:48.155: INFO: Pod "pod-ec328a39-3047-4663-b4aa-63fff69a4d8d" satisfied condition "Succeeded or Failed" Jun 5 00:08:48.158: INFO: Trying to get logs from node latest-worker2 pod pod-ec328a39-3047-4663-b4aa-63fff69a4d8d container test-container: STEP: delete the pod Jun 5 00:08:48.335: INFO: Waiting for pod pod-ec328a39-3047-4663-b4aa-63fff69a4d8d to disappear Jun 5 00:08:48.345: INFO: Pod pod-ec328a39-3047-4663-b4aa-63fff69a4d8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:08:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4317" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1754,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:08:48.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:08:48.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f" in namespace "projected-8021" to be "Succeeded or Failed" Jun 5 00:08:48.494: INFO: Pod "downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.589952ms Jun 5 00:08:50.498: INFO: Pod "downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008469548s Jun 5 00:08:52.503: INFO: Pod "downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013342758s STEP: Saw pod success Jun 5 00:08:52.503: INFO: Pod "downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f" satisfied condition "Succeeded or Failed" Jun 5 00:08:52.507: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f container client-container: STEP: delete the pod Jun 5 00:08:52.531: INFO: Waiting for pod downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f to disappear Jun 5 00:08:52.536: INFO: Pod downwardapi-volume-b56b890a-ce0a-48bc-b57e-82af8893518f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:08:52.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8021" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:08:52.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 5 00:08:52.623: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 5 
00:08:52.638: INFO: Waiting for terminating namespaces to be deleted... Jun 5 00:08:52.640: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 5 00:08:52.646: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.646: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 5 00:08:52.646: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.646: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 5 00:08:52.646: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.646: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:08:52.646: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.646: INFO: Container kube-proxy ready: true, restart count 0 Jun 5 00:08:52.646: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 5 00:08:52.650: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.650: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 5 00:08:52.650: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.650: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 5 00:08:52.650: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 00:08:52.650: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:08:52.650: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC 
(1 container statuses recorded) Jun 5 00:08:52.650: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-44b86bf5-7150-4531-8702-71f58a6936bc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-44b86bf5-7150-4531-8702-71f58a6936bc off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-44b86bf5-7150-4531-8702-71f58a6936bc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:08.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4041" for this suite. 
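The scheduling steps above exercise the host-port conflict check: two pods collide only when they share the full (hostIP, hostPort, protocol) tuple, which is why pod2 (different hostIP) and pod3 (different protocol) both schedule alongside pod1. A minimal illustrative sketch of that conflict key, using the values from the log (this is hedged Python illustration, not the Go test code):

```python
# Illustrative check of the (hostIP, hostPort, protocol) conflict key the
# scheduler uses for host ports; the pod specs below mirror the log values.
def host_port_key(port_spec):
    return (port_spec["hostIP"], port_spec["hostPort"], port_spec["protocol"])

pod1 = {"hostIP": "127.0.0.1", "hostPort": 54321, "protocol": "TCP"}
pod2 = {"hostIP": "127.0.0.2", "hostPort": 54321, "protocol": "TCP"}
pod3 = {"hostIP": "127.0.0.2", "hostPort": 54321, "protocol": "UDP"}

keys = {host_port_key(p) for p in (pod1, pod2, pod3)}
# All three tuples are distinct, so all three pods can land on the same node.
assert len(keys) == 3
```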
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.360 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":107,"skipped":1805,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:08.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:20.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4133" for this suite. • [SLOW TEST:11.144 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":108,"skipped":1810,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:20.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 5 00:09:20.133: INFO: Waiting up to 5m0s for pod "downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1" in namespace "downward-api-1388" to be "Succeeded or Failed" Jun 5 00:09:20.159: INFO: Pod "downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.625976ms Jun 5 00:09:22.162: INFO: Pod "downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028696566s Jun 5 00:09:24.195: INFO: Pod "downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062380722s STEP: Saw pod success Jun 5 00:09:24.195: INFO: Pod "downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1" satisfied condition "Succeeded or Failed" Jun 5 00:09:24.199: INFO: Trying to get logs from node latest-worker pod downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1 container dapi-container: STEP: delete the pod Jun 5 00:09:24.235: INFO: Waiting for pod downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1 to disappear Jun 5 00:09:24.256: INFO: Pod downward-api-73852d75-34ac-43cb-9819-cfbf8352ddf1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1388" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1819,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:24.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating 
cluster-info Jun 5 00:09:24.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' Jun 5 00:09:24.466: INFO: stderr: "" Jun 5 00:09:24.466: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:24.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8885" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":110,"skipped":1821,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:24.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:09:24.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config version' Jun 5 00:09:24.686: INFO: stderr: "" Jun 5 00:09:24.686: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:24.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8187" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":111,"skipped":1831,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:24.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 5 00:09:29.363: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e5256d96-98f7-4281-8b42-ccb1602ca847" Jun 5 00:09:29.363: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e5256d96-98f7-4281-8b42-ccb1602ca847" in namespace "pods-4593" to be "terminated due to deadline exceeded" Jun 5 00:09:29.404: INFO: Pod "pod-update-activedeadlineseconds-e5256d96-98f7-4281-8b42-ccb1602ca847": Phase="Running", Reason="", readiness=true. Elapsed: 40.358114ms Jun 5 00:09:31.408: INFO: Pod "pod-update-activedeadlineseconds-e5256d96-98f7-4281-8b42-ccb1602ca847": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.044851059s Jun 5 00:09:31.408: INFO: Pod "pod-update-activedeadlineseconds-e5256d96-98f7-4281-8b42-ccb1602ca847" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:31.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4593" for this suite. 
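The update above lowers the pod's `activeDeadlineSeconds`, after which the kubelet terminates it and the pod reaches Phase=Failed with Reason=DeadlineExceeded, as the log records. A hedged sketch of the kind of patch body such an update sends (the field name is from the Pod API; the deadline value is illustrative, not the one the suite used):

```python
import json

# Strategic-merge-style patch that shortens activeDeadlineSeconds on a
# running pod; the kubelet then kills the pod once the deadline passes.
# The value 5 is illustrative only.
patch = {"spec": {"activeDeadlineSeconds": 5}}
body = json.dumps(patch)
assert json.loads(body)["spec"]["activeDeadlineSeconds"] == 5
```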
• [SLOW TEST:6.720 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1838,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:31.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 5 00:09:37.546: INFO: &Pod{ObjectMeta:{send-events-724080b2-4f67-447b-8a29-70e425817620 events-9952 /api/v1/namespaces/events-9952/pods/send-events-724080b2-4f67-447b-8a29-70e425817620 8ef883c8-629f-4e04-9ee7-8ee8fc55f779 10333318 0 2020-06-05 00:09:31 +0000 UTC map[name:foo time:498012807] map[] [] [] [{e2e.test Update v1 2020-06-05 00:09:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 00:09:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qcnlg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qcnlg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{}
,Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qcnlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:09:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-05 00:09:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:09:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:09:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.139,StartTime:2020-06-05 00:09:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 00:09:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://cb659735824dfcb50da4d3949bca7a70033b2f4a9eb86e2ec5c9a67036d4870b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 5 00:09:39.551: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 5 00:09:41.556: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:41.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9952" for this suite. 
• [SLOW TEST:10.198 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":113,"skipped":1860,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:41.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7895" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1862,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:45.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 5 00:09:51.804: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6946 PodName:pod-sharedvolume-2eec6d5f-adf7-4d9f-ac5a-02151ef8c89f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:09:51.804: INFO: >>> kubeConfig: /root/.kube/config I0605 00:09:51.841372 7 log.go:172] (0xc002f704d0) (0xc0025054a0) Create stream I0605 00:09:51.841426 7 log.go:172] (0xc002f704d0) (0xc0025054a0) Stream added, broadcasting: 1 I0605 00:09:51.843485 7 log.go:172] (0xc002f704d0) Reply frame received for 1 I0605 00:09:51.843529 7 log.go:172] (0xc002f704d0) (0xc001b63180) Create stream I0605 00:09:51.843545 7 log.go:172] (0xc002f704d0) (0xc001b63180) Stream added, broadcasting: 3 I0605 00:09:51.844375 7 log.go:172] (0xc002f704d0) Reply frame received for 3 I0605 00:09:51.844421 7 log.go:172] 
(0xc002f704d0) (0xc0025055e0) Create stream I0605 00:09:51.844436 7 log.go:172] (0xc002f704d0) (0xc0025055e0) Stream added, broadcasting: 5 I0605 00:09:51.845348 7 log.go:172] (0xc002f704d0) Reply frame received for 5 I0605 00:09:51.930173 7 log.go:172] (0xc002f704d0) Data frame received for 5 I0605 00:09:51.930220 7 log.go:172] (0xc0025055e0) (5) Data frame handling I0605 00:09:51.930256 7 log.go:172] (0xc002f704d0) Data frame received for 3 I0605 00:09:51.930271 7 log.go:172] (0xc001b63180) (3) Data frame handling I0605 00:09:51.930300 7 log.go:172] (0xc001b63180) (3) Data frame sent I0605 00:09:51.930377 7 log.go:172] (0xc002f704d0) Data frame received for 3 I0605 00:09:51.930400 7 log.go:172] (0xc001b63180) (3) Data frame handling I0605 00:09:51.932238 7 log.go:172] (0xc002f704d0) Data frame received for 1 I0605 00:09:51.932271 7 log.go:172] (0xc0025054a0) (1) Data frame handling I0605 00:09:51.932315 7 log.go:172] (0xc0025054a0) (1) Data frame sent I0605 00:09:51.932340 7 log.go:172] (0xc002f704d0) (0xc0025054a0) Stream removed, broadcasting: 1 I0605 00:09:51.932394 7 log.go:172] (0xc002f704d0) Go away received I0605 00:09:51.932435 7 log.go:172] (0xc002f704d0) (0xc0025054a0) Stream removed, broadcasting: 1 I0605 00:09:51.932455 7 log.go:172] (0xc002f704d0) (0xc001b63180) Stream removed, broadcasting: 3 I0605 00:09:51.932477 7 log.go:172] (0xc002f704d0) (0xc0025055e0) Stream removed, broadcasting: 5 Jun 5 00:09:51.932: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6946" for this suite. 
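The EmptyDir test above works because two containers in one pod mount the same emptyDir volume, so a file written by one (the nginx-container) is readable by the other (busybox-main-container, where the `cat` ran). A sketch of such a pod manifest as a Python dict — container names and the mount path follow the log; the images and volume name are illustrative assumptions:

```python
# Pod manifest sketch: two containers share one emptyDir volume, so data
# written by either container is visible to the other. Container names and
# mountPath mirror the log above; images and the volume name are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "volumes": [{"name": "shared-data", "emptyDir": {}}],
        "containers": [
            {"name": "nginx-container", "image": "nginx",
             "volumeMounts": [{"name": "shared-data",
                               "mountPath": "/usr/share/volumeshare"}]},
            {"name": "busybox-main-container", "image": "busybox",
             "volumeMounts": [{"name": "shared-data",
                               "mountPath": "/usr/share/volumeshare"}]},
        ],
    },
}

mounts = {m["name"] for c in pod["spec"]["containers"] for m in c["volumeMounts"]}
volumes = {v["name"] for v in pod["spec"]["volumes"]}
# Every mount refers to the one declared volume, so both containers see the same files.
assert mounts == volumes == {"shared-data"}
```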
• [SLOW TEST:6.219 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":115,"skipped":1864,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:51.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jun 5 00:09:52.008: INFO: Waiting up to 5m0s for pod "client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1" in namespace "containers-8556" to be "Succeeded or Failed" Jun 5 00:09:52.070: INFO: Pod "client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1": Phase="Pending", Reason="", readiness=false. Elapsed: 62.387791ms Jun 5 00:09:54.075: INFO: Pod "client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.067061291s Jun 5 00:09:56.077: INFO: Pod "client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069827584s STEP: Saw pod success Jun 5 00:09:56.077: INFO: Pod "client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1" satisfied condition "Succeeded or Failed" Jun 5 00:09:56.079: INFO: Trying to get logs from node latest-worker2 pod client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1 container test-container: STEP: delete the pod Jun 5 00:09:56.120: INFO: Waiting for pod client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1 to disappear Jun 5 00:09:56.158: INFO: Pod client-containers-891111c9-dfce-45ff-beb4-ecc82a4730a1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:56.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8556" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":116,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:56.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:09:56.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6459" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":117,"skipped":1889,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:09:56.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:00.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8512" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":1895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:00.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:10:00.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc" in namespace "projected-7983" to be "Succeeded or Failed" Jun 5 00:10:00.593: INFO: Pod "downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.759852ms Jun 5 00:10:02.597: INFO: Pod "downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008281474s Jun 5 00:10:04.601: INFO: Pod "downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012192339s STEP: Saw pod success Jun 5 00:10:04.601: INFO: Pod "downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc" satisfied condition "Succeeded or Failed" Jun 5 00:10:04.604: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc container client-container: STEP: delete the pod Jun 5 00:10:04.625: INFO: Waiting for pod downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc to disappear Jun 5 00:10:04.692: INFO: Pod downwardapi-volume-8d19b4aa-3ca8-4fcd-9bf7-b19582a745fc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:04.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7983" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":1920,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:04.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 5 00:10:09.293: INFO: 
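The repeated "Waiting up to 5m0s for pod … to be 'Succeeded or Failed' … Elapsed: …" lines come from the framework polling the pod's phase at a fixed interval until it is terminal or the timeout expires. A minimal sketch of that wait loop, assuming a hypothetical `get_phase` callable standing in for a GET on the pod:

```python
import time

# Sketch of the e2e framework's wait loop: poll the pod phase every
# `interval` seconds until it reaches a terminal phase ("Succeeded" or
# "Failed") or `timeout` seconds have elapsed.

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError(f"pod did not reach a terminal phase within {timeout}s")

# Simulated pod that reports Pending twice, then Succeeded:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))
```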
Successfully updated pod "pod-update-27c53c48-156d-4784-8b0e-c4f427bc33ea" STEP: verifying the updated pod is in kubernetes Jun 5 00:10:09.318: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6697" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1931,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:09.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:10:09.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc" in namespace "projected-1678" to be "Succeeded or Failed" Jun 5 00:10:09.420: INFO: Pod "downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.962281ms Jun 5 00:10:11.445: INFO: Pod "downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034934375s Jun 5 00:10:13.449: INFO: Pod "downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03905244s STEP: Saw pod success Jun 5 00:10:13.449: INFO: Pod "downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc" satisfied condition "Succeeded or Failed" Jun 5 00:10:13.452: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc container client-container: STEP: delete the pod Jun 5 00:10:13.610: INFO: Waiting for pod downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc to disappear Jun 5 00:10:13.666: INFO: Pod downwardapi-volume-37c214e2-8420-4f02-b3d4-e398470d0ccc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:13.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1678" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":1934,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:13.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:31.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2016" for this suite. 
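The Job test above ("tasks sometimes fail and are locally restarted") exercises two pieces of Job semantics: with `restartPolicy: OnFailure` a failing container is restarted in place inside the same pod, and the Job completes once `completions` tasks have succeeded. A toy model of those semantics, under the assumption that a task is just a callable returning success or failure:

```python
import random

# Toy model: each task may fail and be retried in the same "pod"
# (restartPolicy: OnFailure), and the Job is complete once `completions`
# tasks have succeeded.

def run_task_with_local_restarts(task, max_restarts=10):
    """Retry `task` in place until it succeeds, like OnFailure restarts."""
    for _ in range(max_restarts):
        if task():
            return True
    return False

def run_job(task, completions):
    return sum(run_task_with_local_restarts(task) for _ in range(completions)) == completions

rng = random.Random(0)
flaky = lambda: rng.random() > 0.5   # fails roughly half the time
print(run_job(flaky, completions=3))
```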
• [SLOW TEST:18.099 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":122,"skipped":1949,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:31.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Jun 5 00:10:31.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' Jun 5 00:10:32.315: INFO: stderr: "" Jun 5 00:10:32.315: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:32.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2759" for this suite. 
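The api-versions check above runs `kubectl api-versions`, which prints one group/version per line, and verifies that the legacy core API appears as the bare line `v1`. A sketch of that validation, using a fragment of the stdout captured in the log:

```python
# Sketch of the test's check: split `kubectl api-versions` stdout into
# lines and look for an exact "v1" entry (an exact match, so that lines
# like "apps/v1" don't count).

stdout = "apps/v1\nbatch/v1\nbatch/v1beta1\nstorage.k8s.io/v1\nv1\n"

def available_api_versions(stdout):
    return [line for line in stdout.splitlines() if line]

versions = available_api_versions(stdout)
assert "v1" in versions
print(versions)
```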
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":123,"skipped":1963,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:32.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2045 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2045 STEP: creating replication controller externalsvc in namespace services-2045 I0605 00:10:32.497602 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2045, replica count: 2 I0605 00:10:35.548050 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:10:38.548275 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 5 
00:10:38.620: INFO: Creating new exec pod Jun 5 00:10:42.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2045 execpodxpvfn -- /bin/sh -x -c nslookup nodeport-service' Jun 5 00:10:43.072: INFO: stderr: "I0605 00:10:42.841075 1210 log.go:172] (0xc000b1c420) (0xc00054cc80) Create stream\nI0605 00:10:42.841286 1210 log.go:172] (0xc000b1c420) (0xc00054cc80) Stream added, broadcasting: 1\nI0605 00:10:42.843626 1210 log.go:172] (0xc000b1c420) Reply frame received for 1\nI0605 00:10:42.843661 1210 log.go:172] (0xc000b1c420) (0xc00013dea0) Create stream\nI0605 00:10:42.843671 1210 log.go:172] (0xc000b1c420) (0xc00013dea0) Stream added, broadcasting: 3\nI0605 00:10:42.844441 1210 log.go:172] (0xc000b1c420) Reply frame received for 3\nI0605 00:10:42.844484 1210 log.go:172] (0xc000b1c420) (0xc00012e0a0) Create stream\nI0605 00:10:42.844495 1210 log.go:172] (0xc000b1c420) (0xc00012e0a0) Stream added, broadcasting: 5\nI0605 00:10:42.845412 1210 log.go:172] (0xc000b1c420) Reply frame received for 5\nI0605 00:10:42.929548 1210 log.go:172] (0xc000b1c420) Data frame received for 5\nI0605 00:10:42.929606 1210 log.go:172] (0xc00012e0a0) (5) Data frame handling\nI0605 00:10:42.929636 1210 log.go:172] (0xc00012e0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0605 00:10:43.063232 1210 log.go:172] (0xc000b1c420) Data frame received for 3\nI0605 00:10:43.063276 1210 log.go:172] (0xc00013dea0) (3) Data frame handling\nI0605 00:10:43.063311 1210 log.go:172] (0xc00013dea0) (3) Data frame sent\nI0605 00:10:43.064339 1210 log.go:172] (0xc000b1c420) Data frame received for 3\nI0605 00:10:43.064370 1210 log.go:172] (0xc00013dea0) (3) Data frame handling\nI0605 00:10:43.064393 1210 log.go:172] (0xc00013dea0) (3) Data frame sent\nI0605 00:10:43.065097 1210 log.go:172] (0xc000b1c420) Data frame received for 3\nI0605 00:10:43.065281 1210 log.go:172] (0xc00013dea0) (3) Data frame handling\nI0605 
00:10:43.065341 1210 log.go:172] (0xc000b1c420) Data frame received for 5\nI0605 00:10:43.065387 1210 log.go:172] (0xc00012e0a0) (5) Data frame handling\nI0605 00:10:43.067225 1210 log.go:172] (0xc000b1c420) Data frame received for 1\nI0605 00:10:43.067255 1210 log.go:172] (0xc00054cc80) (1) Data frame handling\nI0605 00:10:43.067286 1210 log.go:172] (0xc00054cc80) (1) Data frame sent\nI0605 00:10:43.067306 1210 log.go:172] (0xc000b1c420) (0xc00054cc80) Stream removed, broadcasting: 1\nI0605 00:10:43.067555 1210 log.go:172] (0xc000b1c420) Go away received\nI0605 00:10:43.067710 1210 log.go:172] (0xc000b1c420) (0xc00054cc80) Stream removed, broadcasting: 1\nI0605 00:10:43.067730 1210 log.go:172] (0xc000b1c420) (0xc00013dea0) Stream removed, broadcasting: 3\nI0605 00:10:43.067744 1210 log.go:172] (0xc000b1c420) (0xc00012e0a0) Stream removed, broadcasting: 5\n" Jun 5 00:10:43.073: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2045.svc.cluster.local\tcanonical name = externalsvc.services-2045.svc.cluster.local.\nName:\texternalsvc.services-2045.svc.cluster.local\nAddress: 10.107.251.111\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2045, will wait for the garbage collector to delete the pods Jun 5 00:10:43.134: INFO: Deleting ReplicationController externalsvc took: 7.637628ms Jun 5 00:10:43.434: INFO: Terminating ReplicationController externalsvc pods took: 300.336377ms Jun 5 00:10:55.418: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:55.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2045" for this suite. 
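The nslookup output above is the heart of this Services test: after the switch to `type=ExternalName`, the service name no longer resolves to a cluster IP but via a CNAME to the target FQDN. A small parser sketch that extracts the canonical name from output shaped like the stdout in the log:

```python
# Sketch: pull the CNAME target out of nslookup output of the form
#   <query-name>    canonical name = <target>.
# as seen in the test's captured stdout.

stdout = (
    "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
    "nodeport-service.services-2045.svc.cluster.local\tcanonical name = "
    "externalsvc.services-2045.svc.cluster.local.\n"
)

def canonical_name(nslookup_stdout):
    for line in nslookup_stdout.splitlines():
        if "canonical name =" in line:
            return line.split("canonical name =")[1].strip()
    return None

print(canonical_name(stdout))
```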
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.155 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":124,"skipped":1984,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:55.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 5 00:10:55.558: INFO: Waiting up to 5m0s for pod "downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8" in namespace "downward-api-4639" to be "Succeeded or Failed" Jun 5 00:10:55.578: INFO: Pod "downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.865129ms Jun 5 00:10:57.582: INFO: Pod "downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024262834s Jun 5 00:10:59.586: INFO: Pod "downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028851099s STEP: Saw pod success Jun 5 00:10:59.586: INFO: Pod "downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8" satisfied condition "Succeeded or Failed" Jun 5 00:10:59.589: INFO: Trying to get logs from node latest-worker pod downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8 container dapi-container: STEP: delete the pod Jun 5 00:10:59.635: INFO: Waiting for pod downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8 to disappear Jun 5 00:10:59.662: INFO: Pod downward-api-25f28a65-e3b8-412a-b628-cf00b30668a8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:10:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4639" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":1987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:10:59.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: 
wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0605 00:11:40.649300 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 5 00:11:40.649: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:11:40.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5788" for this suite. 
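The garbage-collector test above deletes a replication controller with delete options that say "orphan" and then waits 30 seconds to confirm the GC does not delete the pods. A toy model of that propagation behavior, representing each object only by the set of its owner names (a deliberate simplification of `metadata.ownerReferences`):

```python
# Toy model of deletion propagation: with propagationPolicy=Orphan the
# owner is removed and its ownerReferences are stripped from dependents,
# so the garbage collector never deletes them; otherwise deletion cascades.

def delete_owner(objects, owner, policy="Background"):
    objects = {name: set(refs) for name, refs in objects.items()}
    del objects[owner]
    for name in list(objects):
        if owner in objects[name]:
            if policy == "Orphan":
                objects[name].discard(owner)   # pods survive, now ownerless
            else:
                del objects[name]              # cascading delete
    return objects

cluster = {"rc": set(), "pod-a": {"rc"}, "pod-b": {"rc"}}
print(delete_owner(cluster, "rc", policy="Orphan"))
```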
• [SLOW TEST:40.988 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":126,"skipped":2020,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:11:40.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-19782c05-077d-44f4-bd01-be8d126a36fc STEP: Creating a pod to test consume secrets Jun 5 00:11:40.797: INFO: Waiting up to 5m0s for pod "pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d" in namespace "secrets-4083" to be "Succeeded or Failed" Jun 5 00:11:40.817: INFO: Pod "pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.169234ms Jun 5 00:11:42.879: INFO: Pod "pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081595067s Jun 5 00:11:44.883: INFO: Pod "pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085850282s STEP: Saw pod success Jun 5 00:11:44.883: INFO: Pod "pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d" satisfied condition "Succeeded or Failed" Jun 5 00:11:44.886: INFO: Trying to get logs from node latest-worker pod pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d container secret-volume-test: STEP: delete the pod Jun 5 00:11:44.917: INFO: Waiting for pod pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d to disappear Jun 5 00:11:44.926: INFO: Pod pod-secrets-a8f75668-1c9e-4801-a6f9-17338cc3339d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:11:44.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4083" for this suite. STEP: Destroying namespace "secret-namespace-7274" for this suite. 
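The Secrets test above (note it destroys two namespaces, `secrets-4083` and `secret-namespace-7274`) relies on secrets being namespaced objects: two secrets with the same name in different namespaces are distinct, and a pod only ever mounts the one from its own namespace. A minimal sketch of that keying, with an in-memory store standing in for the API server:

```python
# Sketch: secrets are effectively keyed by (namespace, name), so a
# same-named secret in another namespace cannot shadow the one a pod
# mounts from its own namespace.

store = {}

def create_secret(namespace, name, data):
    store[(namespace, name)] = data

create_secret("secrets-4083", "secret-test", {"data-1": "value-1"})
create_secret("secret-namespace-7274", "secret-test", {"data-1": "other"})

# A pod in secrets-4083 resolves the name within its own namespace:
print(store[("secrets-4083", "secret-test")])
```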
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":2028,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:11:45.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-9815/secret-test-6d8e598d-45a9-40f6-aafd-10a0a4cff9c2 STEP: Creating a pod to test consume secrets Jun 5 00:11:45.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2" in namespace "secrets-9815" to be "Succeeded or Failed" Jun 5 00:11:45.154: INFO: Pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52155ms Jun 5 00:11:47.358: INFO: Pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212016041s Jun 5 00:11:49.388: INFO: Pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241790968s Jun 5 00:11:51.543: INFO: Pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.397665614s STEP: Saw pod success Jun 5 00:11:51.544: INFO: Pod "pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2" satisfied condition "Succeeded or Failed" Jun 5 00:11:51.547: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2 container env-test: STEP: delete the pod Jun 5 00:11:51.707: INFO: Waiting for pod pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2 to disappear Jun 5 00:11:51.742: INFO: Pod pod-configmaps-3c3037b0-b697-4505-86b7-c794ecaa7fa2 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:11:51.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9815" for this suite. • [SLOW TEST:6.753 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2029,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:11:51.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-eb71627b-9232-4746-bc06-c586b0faf564 STEP: Creating a pod to test consume secrets Jun 5 00:11:52.371: INFO: Waiting up to 5m0s for pod "pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098" in namespace "secrets-358" to be "Succeeded or Failed" Jun 5 00:11:52.422: INFO: Pod "pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098": Phase="Pending", Reason="", readiness=false. Elapsed: 51.10762ms Jun 5 00:11:54.425: INFO: Pod "pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054578065s Jun 5 00:11:56.442: INFO: Pod "pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071175083s STEP: Saw pod success Jun 5 00:11:56.442: INFO: Pod "pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098" satisfied condition "Succeeded or Failed" Jun 5 00:11:56.445: INFO: Trying to get logs from node latest-worker pod pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098 container secret-volume-test: STEP: delete the pod Jun 5 00:11:56.462: INFO: Waiting for pod pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098 to disappear Jun 5 00:11:56.491: INFO: Pod pod-secrets-e84eda9f-aee5-494e-8d7a-e7dbba773098 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:11:56.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-358" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2034,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:11:56.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-12b4d45a-f8f8-4def-b93e-953584f0d214 STEP: Creating a pod to test consume secrets Jun 5 00:11:56.849: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf" in namespace "projected-3480" to be "Succeeded or Failed" Jun 5 00:11:56.902: INFO: Pod "pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf": Phase="Pending", Reason="", readiness=false. Elapsed: 53.451657ms Jun 5 00:11:58.907: INFO: Pod "pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058316137s Jun 5 00:12:00.911: INFO: Pod "pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062631553s STEP: Saw pod success Jun 5 00:12:00.911: INFO: Pod "pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf" satisfied condition "Succeeded or Failed" Jun 5 00:12:00.915: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf container secret-volume-test: STEP: delete the pod Jun 5 00:12:00.953: INFO: Waiting for pod pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf to disappear Jun 5 00:12:00.957: INFO: Pod pod-projected-secrets-4908ae44-6c4c-47ac-9049-c450c8e4becf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:12:00.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3480" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2044,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:12:00.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: 
updating the pod Jun 5 00:14:01.585: INFO: Successfully updated pod "var-expansion-d0f152cf-8c8a-4981-8f4d-719a57234a6e" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 5 00:14:03.606: INFO: Deleting pod "var-expansion-d0f152cf-8c8a-4981-8f4d-719a57234a6e" in namespace "var-expansion-6973" Jun 5 00:14:03.612: INFO: Wait up to 5m0s for pod "var-expansion-d0f152cf-8c8a-4981-8f4d-719a57234a6e" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:14:45.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6973" for this suite. • [SLOW TEST:164.667 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":131,"skipped":2045,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:14:45.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-35cd0604-40f6-4d7e-96f4-b79147ea0f4e STEP: Creating a pod to test consume configMaps Jun 5 00:14:45.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b" in namespace "projected-9237" to be "Succeeded or Failed" Jun 5 00:14:45.759: INFO: Pod "pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.805474ms Jun 5 00:14:47.784: INFO: Pod "pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033033324s Jun 5 00:14:49.789: INFO: Pod "pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038042275s STEP: Saw pod success Jun 5 00:14:49.789: INFO: Pod "pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b" satisfied condition "Succeeded or Failed" Jun 5 00:14:49.792: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b container projected-configmap-volume-test: STEP: delete the pod Jun 5 00:14:49.852: INFO: Waiting for pod pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b to disappear Jun 5 00:14:49.891: INFO: Pod pod-projected-configmaps-db84f449-38c4-44c0-86f8-6b600892692b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:14:49.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9237" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2048,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:14:49.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8046.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8046.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:14:56.065: INFO: DNS probes using dns-8046/dns-test-0c6ad631-3ef0-4321-9d57-305257316b6f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:14:56.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8046" for this suite. 
• [SLOW TEST:6.219 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":133,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:14:56.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-35fd6680-33f0-4a8f-83da-eaa711b940ee [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:14:56.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6784" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":134,"skipped":2076,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:14:56.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 5 00:14:56.847: INFO: Waiting up to 5m0s for pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7" in namespace "emptydir-6469" to be "Succeeded or Failed" Jun 5 00:14:56.855: INFO: Pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.690192ms Jun 5 00:14:58.994: INFO: Pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147305868s Jun 5 00:15:00.999: INFO: Pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151466597s Jun 5 00:15:03.003: INFO: Pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.155607128s STEP: Saw pod success Jun 5 00:15:03.003: INFO: Pod "pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7" satisfied condition "Succeeded or Failed" Jun 5 00:15:03.005: INFO: Trying to get logs from node latest-worker2 pod pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7 container test-container: STEP: delete the pod Jun 5 00:15:03.032: INFO: Waiting for pod pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7 to disappear Jun 5 00:15:03.044: INFO: Pod pod-d9c8c7ca-a5d1-48f8-983f-08b70829c6d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:03.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6469" for this suite. • [SLOW TEST:6.449 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2083,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:03.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:15:03.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4" in namespace "projected-8960" to be "Succeeded or Failed" Jun 5 00:15:03.158: INFO: Pod "downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.10588ms Jun 5 00:15:05.186: INFO: Pod "downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030790753s Jun 5 00:15:07.218: INFO: Pod "downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063325975s STEP: Saw pod success Jun 5 00:15:07.218: INFO: Pod "downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4" satisfied condition "Succeeded or Failed" Jun 5 00:15:07.221: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4 container client-container: STEP: delete the pod Jun 5 00:15:07.286: INFO: Waiting for pod downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4 to disappear Jun 5 00:15:07.293: INFO: Pod downwardapi-volume-7dbf44d0-3241-42e3-bd3a-b2ff2d0862d4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:07.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8960" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2086,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:07.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:15:07.394: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 5 00:15:12.410: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 5 00:15:12.410: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 5 00:15:12.479: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1822 /apis/apps/v1/namespaces/deployment-1822/deployments/test-cleanup-deployment 976e03f2-c036-42c6-a61f-d4b8f214be81 10335220 1 2020-06-05 00:15:12 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-06-05 00:15:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c7ded8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 5 00:15:12.540: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-1822 /apis/apps/v1/namespaces/deployment-1822/replicasets/test-cleanup-deployment-6688745694 499153be-8502-472a-8ed0-d82c4311b630 10335227 1 2020-06-05 00:15:12 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 976e03f2-c036-42c6-a61f-d4b8f214be81 0xc003713f77 0xc003713f78}] [] [{kube-controller-manager Update apps/v1 2020-06-05 00:15:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"976e03f2-c036-42c6-a61f-d4b8f214be81\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005360008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:15:12.540: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 5 00:15:12.541: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1822 /apis/apps/v1/namespaces/deployment-1822/replicasets/test-cleanup-controller 0b587b05-6c23-4a56-952f-38647bc9da5f 10335221 1 2020-06-05 00:15:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 976e03f2-c036-42c6-a61f-d4b8f214be81 0xc003713e37 0xc003713e38}] [] [{e2e.test Update apps/v1 2020-06-05 00:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-05 00:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"976e03f2-c036-42c6-a61f-d4b8f214be81\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003713ed8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:15:12.600: INFO: Pod "test-cleanup-controller-rm9f6" is available: &Pod{ObjectMeta:{test-cleanup-controller-rm9f6 test-cleanup-controller- deployment-1822 /api/v1/namespaces/deployment-1822/pods/test-cleanup-controller-rm9f6 82009c85-57ba-4e25-bf53-5063c86beb89 10335211 0 2020-06-05 00:15:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0b587b05-6c23-4a56-952f-38647bc9da5f 0xc00533a2b7 0xc00533a2b8}] [] [{kube-controller-manager Update v1 2020-06-05 00:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b587b05-6c23-4a56-952f-38647bc9da5f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 00:15:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-crs8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-crs8g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-crs8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServi
ceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:15:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:15:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:15:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.158,StartTime:2020-06-05 00:15:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 00:15:11 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e990dbb08a079fa7d23f8858736a788dd51eb4e0befebc044c02d217f852b1b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 00:15:12.601: INFO: Pod "test-cleanup-deployment-6688745694-zj77r" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-zj77r test-cleanup-deployment-6688745694- deployment-1822 /api/v1/namespaces/deployment-1822/pods/test-cleanup-deployment-6688745694-zj77r 7d35abee-05f5-4e50-a250-708b63d55e4d 10335228 0 2020-06-05 00:15:12 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 499153be-8502-472a-8ed0-d82c4311b630 0xc00533a477 0xc00533a478}] [] [{kube-controller-manager Update v1 2020-06-05 00:15:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"499153be-8502-472a-8ed0-d82c4311b630\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-crs8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-crs8g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-crs8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:15:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:12.601: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1822" for this suite. • [SLOW TEST:5.319 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":137,"skipped":2099,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:12.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jun 5 00:15:18.717: INFO: Pod pod-hostip-aaca593f-0352-4c69-9634-203402713282 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:18.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6819" for this suite. 
• [SLOW TEST:6.106 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2101,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:18.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0605 00:15:19.872061 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 5 00:15:19.872: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:19.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5255" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":139,"skipped":2101,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:19.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:15:31.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8763" for this suite. • [SLOW TEST:11.459 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":140,"skipped":2109,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:15:31.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:17:31.496: INFO: Deleting pod "var-expansion-a4b830e0-05c6-427d-bda8-a2137c778cc7" in namespace "var-expansion-4111" Jun 5 00:17:31.503: INFO: Wait up to 5m0s for pod "var-expansion-a4b830e0-05c6-427d-bda8-a2137c778cc7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:17:35.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4111" for this suite. 
• [SLOW TEST:124.202 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":141,"skipped":2118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:17:35.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 5 00:17:35.639: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:17:43.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3222" for this suite. 
• [SLOW TEST:8.000 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":142,"skipped":2148,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:17:43.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 5 00:17:50.686: INFO: 3 pods remaining Jun 5 00:17:50.686: INFO: 0 pods has nil DeletionTimestamp Jun 5 00:17:50.686: INFO: Jun 5 00:17:51.777: INFO: 0 pods remaining Jun 5 00:17:51.777: INFO: 0 pods has nil DeletionTimestamp Jun 5 00:17:51.777: INFO: Jun 5 00:17:52.400: INFO: 0 pods remaining Jun 5 00:17:52.400: INFO: 0 pods has nil DeletionTimestamp Jun 5 00:17:52.400: INFO: STEP: Gathering metrics W0605 00:17:53.651826 7 metrics_grabber.go:94] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 5 00:17:53.651: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:17:53.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5523" for this suite. 
• [SLOW TEST:10.140 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":143,"skipped":2151,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:17:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 5 00:18:02.580: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:02.588: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:04.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:04.593: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:06.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:06.593: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:08.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:08.592: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:10.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:10.592: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:12.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:12.592: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:14.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:14.594: INFO: Pod pod-with-poststart-exec-hook still exists Jun 5 00:18:16.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 5 00:18:16.593: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:18:16.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9449" for this suite. 
• [SLOW TEST:22.920 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:18:16.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:18:16.668: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition 
set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 5 00:18:18.740: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:18:19.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1807" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":145,"skipped":2203,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:18:19.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d6736d37-d043-41b7-b8d1-5f0648ab4996 STEP: Creating a pod to test consume secrets Jun 5 00:18:20.591: INFO: Waiting up to 5m0s for pod "pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11" in namespace "secrets-7174" to be "Succeeded or Failed" Jun 5 00:18:20.699: INFO: Pod "pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11": Phase="Pending", Reason="", readiness=false. 
Elapsed: 107.298143ms Jun 5 00:18:22.703: INFO: Pod "pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111578845s Jun 5 00:18:24.706: INFO: Pod "pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114659132s STEP: Saw pod success Jun 5 00:18:24.706: INFO: Pod "pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11" satisfied condition "Succeeded or Failed" Jun 5 00:18:24.708: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11 container secret-volume-test: STEP: delete the pod Jun 5 00:18:24.731: INFO: Waiting for pod pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11 to disappear Jun 5 00:18:24.735: INFO: Pod pod-secrets-d524dc89-1bf2-4782-8a03-2df7f397eb11 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:18:24.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7174" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2204,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:18:24.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 5 00:18:24.817: INFO: >>> kubeConfig: /root/.kube/config Jun 5 00:18:27.805: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:18:38.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1876" for this suite. 
• [SLOW TEST:13.851 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":147,"skipped":2206,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:18:38.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 5 00:18:38.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:38.790: INFO: Number of nodes with available pods: 0 Jun 5 00:18:38.790: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:39.796: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:39.800: INFO: Number of nodes with available pods: 0 Jun 5 00:18:39.800: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:40.926: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:40.930: INFO: Number of nodes with available pods: 0 Jun 5 00:18:40.930: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:41.796: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:41.800: INFO: Number of nodes with available pods: 0 Jun 5 00:18:41.800: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:42.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:43.043: INFO: Number of nodes with available pods: 1 Jun 5 00:18:43.043: INFO: Node latest-worker2 is running more than one daemon pod Jun 5 00:18:43.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:43.922: INFO: Number of nodes with available pods: 2 Jun 5 00:18:43.922: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 5 00:18:43.977: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:44.016: INFO: Number of nodes with available pods: 1 Jun 5 00:18:44.016: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:45.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:45.024: INFO: Number of nodes with available pods: 1 Jun 5 00:18:45.024: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:46.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:46.025: INFO: Number of nodes with available pods: 1 Jun 5 00:18:46.025: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:47.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:47.025: INFO: Number of nodes with available pods: 1 Jun 5 00:18:47.025: INFO: Node latest-worker is running more than one daemon pod Jun 5 00:18:48.022: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:18:48.025: INFO: Number of nodes with available pods: 2 Jun 5 00:18:48.026: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6451, will wait for the garbage collector to delete the pods Jun 5 00:18:48.092: INFO: Deleting DaemonSet.extensions daemon-set took: 6.895785ms Jun 5 00:18:48.492: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.402257ms Jun 5 00:18:55.337: INFO: Number of nodes with available pods: 0 Jun 5 00:18:55.337: INFO: Number of running nodes: 0, number of available pods: 0 Jun 5 00:18:55.341: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6451/daemonsets","resourceVersion":"10336439"},"items":null} Jun 5 00:18:55.344: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6451/pods","resourceVersion":"10336439"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:18:55.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6451" for this suite. 
• [SLOW TEST:16.799 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":148,"skipped":2207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:18:55.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4909 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4909 I0605 00:18:55.612944 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4909, replica count: 2 I0605 00:18:58.663279 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0605 00:19:01.663544 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:19:01.663: INFO: Creating new exec pod Jun 5 00:19:06.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4909 execpodldrnh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 5 00:19:09.388: INFO: stderr: "I0605 00:19:09.269261 1230 log.go:172] (0xc000c060b0) (0xc000502820) Create stream\nI0605 00:19:09.269304 1230 log.go:172] (0xc000c060b0) (0xc000502820) Stream added, broadcasting: 1\nI0605 00:19:09.271567 1230 log.go:172] (0xc000c060b0) Reply frame received for 1\nI0605 00:19:09.271604 1230 log.go:172] (0xc000c060b0) (0xc000178280) Create stream\nI0605 00:19:09.271617 1230 log.go:172] (0xc000c060b0) (0xc000178280) Stream added, broadcasting: 3\nI0605 00:19:09.272439 1230 log.go:172] (0xc000c060b0) Reply frame received for 3\nI0605 00:19:09.272483 1230 log.go:172] (0xc000c060b0) (0xc0004485a0) Create stream\nI0605 00:19:09.272503 1230 log.go:172] (0xc000c060b0) (0xc0004485a0) Stream added, broadcasting: 5\nI0605 00:19:09.273429 1230 log.go:172] (0xc000c060b0) Reply frame received for 5\nI0605 00:19:09.369526 1230 log.go:172] (0xc000c060b0) Data frame received for 5\nI0605 00:19:09.369561 1230 log.go:172] (0xc0004485a0) (5) Data frame handling\nI0605 00:19:09.369582 1230 log.go:172] (0xc0004485a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0605 00:19:09.378175 1230 log.go:172] (0xc000c060b0) Data frame received for 5\nI0605 00:19:09.378202 1230 log.go:172] (0xc0004485a0) (5) Data frame handling\nI0605 00:19:09.378233 1230 log.go:172] (0xc0004485a0) (5) Data frame sent\nI0605 00:19:09.378251 1230 log.go:172] (0xc000c060b0) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0605 00:19:09.378268 1230 log.go:172] (0xc0004485a0) 
(5) Data frame handling\nI0605 00:19:09.378602 1230 log.go:172] (0xc000c060b0) Data frame received for 3\nI0605 00:19:09.378636 1230 log.go:172] (0xc000178280) (3) Data frame handling\nI0605 00:19:09.380710 1230 log.go:172] (0xc000c060b0) Data frame received for 1\nI0605 00:19:09.380736 1230 log.go:172] (0xc000502820) (1) Data frame handling\nI0605 00:19:09.380753 1230 log.go:172] (0xc000502820) (1) Data frame sent\nI0605 00:19:09.380767 1230 log.go:172] (0xc000c060b0) (0xc000502820) Stream removed, broadcasting: 1\nI0605 00:19:09.380780 1230 log.go:172] (0xc000c060b0) Go away received\nI0605 00:19:09.381551 1230 log.go:172] (0xc000c060b0) (0xc000502820) Stream removed, broadcasting: 1\nI0605 00:19:09.381576 1230 log.go:172] (0xc000c060b0) (0xc000178280) Stream removed, broadcasting: 3\nI0605 00:19:09.381589 1230 log.go:172] (0xc000c060b0) (0xc0004485a0) Stream removed, broadcasting: 5\n" Jun 5 00:19:09.388: INFO: stdout: "" Jun 5 00:19:09.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4909 execpodldrnh -- /bin/sh -x -c nc -zv -t -w 2 10.98.209.55 80' Jun 5 00:19:09.636: INFO: stderr: "I0605 00:19:09.542049 1266 log.go:172] (0xc0009ec790) (0xc0002408c0) Create stream\nI0605 00:19:09.542108 1266 log.go:172] (0xc0009ec790) (0xc0002408c0) Stream added, broadcasting: 1\nI0605 00:19:09.544588 1266 log.go:172] (0xc0009ec790) Reply frame received for 1\nI0605 00:19:09.544643 1266 log.go:172] (0xc0009ec790) (0xc0000dd0e0) Create stream\nI0605 00:19:09.544669 1266 log.go:172] (0xc0009ec790) (0xc0000dd0e0) Stream added, broadcasting: 3\nI0605 00:19:09.545932 1266 log.go:172] (0xc0009ec790) Reply frame received for 3\nI0605 00:19:09.545975 1266 log.go:172] (0xc0009ec790) (0xc000241680) Create stream\nI0605 00:19:09.545997 1266 log.go:172] (0xc0009ec790) (0xc000241680) Stream added, broadcasting: 5\nI0605 00:19:09.547026 1266 log.go:172] (0xc0009ec790) Reply frame received for 5\nI0605 
00:19:09.631226 1266 log.go:172] (0xc0009ec790) Data frame received for 5\nI0605 00:19:09.631255 1266 log.go:172] (0xc000241680) (5) Data frame handling\nI0605 00:19:09.631262 1266 log.go:172] (0xc000241680) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.209.55 80\nConnection to 10.98.209.55 80 port [tcp/http] succeeded!\nI0605 00:19:09.631287 1266 log.go:172] (0xc0009ec790) Data frame received for 3\nI0605 00:19:09.631323 1266 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0605 00:19:09.631344 1266 log.go:172] (0xc0009ec790) Data frame received for 5\nI0605 00:19:09.631349 1266 log.go:172] (0xc000241680) (5) Data frame handling\nI0605 00:19:09.632322 1266 log.go:172] (0xc0009ec790) Data frame received for 1\nI0605 00:19:09.632338 1266 log.go:172] (0xc0002408c0) (1) Data frame handling\nI0605 00:19:09.632343 1266 log.go:172] (0xc0002408c0) (1) Data frame sent\nI0605 00:19:09.632351 1266 log.go:172] (0xc0009ec790) (0xc0002408c0) Stream removed, broadcasting: 1\nI0605 00:19:09.632379 1266 log.go:172] (0xc0009ec790) Go away received\nI0605 00:19:09.632590 1266 log.go:172] (0xc0009ec790) (0xc0002408c0) Stream removed, broadcasting: 1\nI0605 00:19:09.632608 1266 log.go:172] (0xc0009ec790) (0xc0000dd0e0) Stream removed, broadcasting: 3\nI0605 00:19:09.632613 1266 log.go:172] (0xc0009ec790) (0xc000241680) Stream removed, broadcasting: 5\n" Jun 5 00:19:09.636: INFO: stdout: "" Jun 5 00:19:09.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4909 execpodldrnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30039' Jun 5 00:19:09.818: INFO: stderr: "I0605 00:19:09.751584 1290 log.go:172] (0xc000a24160) (0xc0003666e0) Create stream\nI0605 00:19:09.751664 1290 log.go:172] (0xc000a24160) (0xc0003666e0) Stream added, broadcasting: 1\nI0605 00:19:09.754687 1290 log.go:172] (0xc000a24160) Reply frame received for 1\nI0605 00:19:09.754711 1290 log.go:172] (0xc000a24160) (0xc000366b40) Create 
stream\nI0605 00:19:09.754718 1290 log.go:172] (0xc000a24160) (0xc000366b40) Stream added, broadcasting: 3\nI0605 00:19:09.755862 1290 log.go:172] (0xc000a24160) Reply frame received for 3\nI0605 00:19:09.755937 1290 log.go:172] (0xc000a24160) (0xc000664320) Create stream\nI0605 00:19:09.755958 1290 log.go:172] (0xc000a24160) (0xc000664320) Stream added, broadcasting: 5\nI0605 00:19:09.757698 1290 log.go:172] (0xc000a24160) Reply frame received for 5\nI0605 00:19:09.808891 1290 log.go:172] (0xc000a24160) Data frame received for 5\nI0605 00:19:09.808923 1290 log.go:172] (0xc000664320) (5) Data frame handling\nI0605 00:19:09.808946 1290 log.go:172] (0xc000664320) (5) Data frame sent\nI0605 00:19:09.808954 1290 log.go:172] (0xc000a24160) Data frame received for 5\nI0605 00:19:09.808960 1290 log.go:172] (0xc000664320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30039\nConnection to 172.17.0.13 30039 port [tcp/30039] succeeded!\nI0605 00:19:09.808979 1290 log.go:172] (0xc000664320) (5) Data frame sent\nI0605 00:19:09.809491 1290 log.go:172] (0xc000a24160) Data frame received for 3\nI0605 00:19:09.809507 1290 log.go:172] (0xc000366b40) (3) Data frame handling\nI0605 00:19:09.809752 1290 log.go:172] (0xc000a24160) Data frame received for 5\nI0605 00:19:09.809769 1290 log.go:172] (0xc000664320) (5) Data frame handling\nI0605 00:19:09.811295 1290 log.go:172] (0xc000a24160) Data frame received for 1\nI0605 00:19:09.811314 1290 log.go:172] (0xc0003666e0) (1) Data frame handling\nI0605 00:19:09.811322 1290 log.go:172] (0xc0003666e0) (1) Data frame sent\nI0605 00:19:09.811341 1290 log.go:172] (0xc000a24160) (0xc0003666e0) Stream removed, broadcasting: 1\nI0605 00:19:09.811354 1290 log.go:172] (0xc000a24160) Go away received\nI0605 00:19:09.811866 1290 log.go:172] (0xc000a24160) (0xc0003666e0) Stream removed, broadcasting: 1\nI0605 00:19:09.811896 1290 log.go:172] (0xc000a24160) (0xc000366b40) Stream removed, broadcasting: 3\nI0605 00:19:09.811907 1290 log.go:172] 
(0xc000a24160) (0xc000664320) Stream removed, broadcasting: 5\n" Jun 5 00:19:09.818: INFO: stdout: "" Jun 5 00:19:09.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4909 execpodldrnh -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30039' Jun 5 00:19:10.052: INFO: stderr: "I0605 00:19:09.986221 1311 log.go:172] (0xc000b02e70) (0xc0006a9680) Create stream\nI0605 00:19:09.986287 1311 log.go:172] (0xc000b02e70) (0xc0006a9680) Stream added, broadcasting: 1\nI0605 00:19:09.991742 1311 log.go:172] (0xc000b02e70) Reply frame received for 1\nI0605 00:19:09.991799 1311 log.go:172] (0xc000b02e70) (0xc000482f00) Create stream\nI0605 00:19:09.991824 1311 log.go:172] (0xc000b02e70) (0xc000482f00) Stream added, broadcasting: 3\nI0605 00:19:09.992682 1311 log.go:172] (0xc000b02e70) Reply frame received for 3\nI0605 00:19:09.992725 1311 log.go:172] (0xc000b02e70) (0xc0000dce60) Create stream\nI0605 00:19:09.992741 1311 log.go:172] (0xc000b02e70) (0xc0000dce60) Stream added, broadcasting: 5\nI0605 00:19:09.993892 1311 log.go:172] (0xc000b02e70) Reply frame received for 5\nI0605 00:19:10.044492 1311 log.go:172] (0xc000b02e70) Data frame received for 5\nI0605 00:19:10.044539 1311 log.go:172] (0xc0000dce60) (5) Data frame handling\nI0605 00:19:10.044555 1311 log.go:172] (0xc0000dce60) (5) Data frame sent\nI0605 00:19:10.044564 1311 log.go:172] (0xc000b02e70) Data frame received for 5\nI0605 00:19:10.044575 1311 log.go:172] (0xc0000dce60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30039\nConnection to 172.17.0.12 30039 port [tcp/30039] succeeded!\nI0605 00:19:10.044612 1311 log.go:172] (0xc000b02e70) Data frame received for 3\nI0605 00:19:10.044632 1311 log.go:172] (0xc000482f00) (3) Data frame handling\nI0605 00:19:10.045985 1311 log.go:172] (0xc000b02e70) Data frame received for 1\nI0605 00:19:10.046021 1311 log.go:172] (0xc0006a9680) (1) Data frame handling\nI0605 00:19:10.046036 1311 
log.go:172] (0xc0006a9680) (1) Data frame sent\nI0605 00:19:10.046048 1311 log.go:172] (0xc000b02e70) (0xc0006a9680) Stream removed, broadcasting: 1\nI0605 00:19:10.046071 1311 log.go:172] (0xc000b02e70) Go away received\nI0605 00:19:10.046400 1311 log.go:172] (0xc000b02e70) (0xc0006a9680) Stream removed, broadcasting: 1\nI0605 00:19:10.046417 1311 log.go:172] (0xc000b02e70) (0xc000482f00) Stream removed, broadcasting: 3\nI0605 00:19:10.046424 1311 log.go:172] (0xc000b02e70) (0xc0000dce60) Stream removed, broadcasting: 5\n" Jun 5 00:19:10.052: INFO: stdout: "" Jun 5 00:19:10.052: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:19:10.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4909" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.762 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":149,"skipped":2240,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 
00:19:10.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a Jun 5 00:19:10.257: INFO: Pod name my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a: Found 0 pods out of 1 Jun 5 00:19:15.271: INFO: Pod name my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a: Found 1 pods out of 1 Jun 5 00:19:15.271: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a" are running Jun 5 00:19:15.273: INFO: Pod "my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a-lhrfw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:19:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:19:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:19:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:19:10 +0000 UTC Reason: Message:}]) Jun 5 00:19:15.273: INFO: Trying to dial the pod Jun 5 00:19:20.285: INFO: Controller my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a: Got expected result from replica 1 [my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a-lhrfw]: "my-hostname-basic-e3e28df1-fd31-49d9-af3f-cda5b7949a1a-lhrfw", 1 of 1 required successes so far 
[AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:19:20.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7584" for this suite. • [SLOW TEST:10.137 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":150,"skipped":2242,"failed":0} [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:19:20.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d STEP: updating the pod Jun 5 00:19:28.972: INFO: Successfully updated pod "var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d" STEP: waiting for pod and container restart STEP: Failing liveness probe Jun 5 00:19:29.014: INFO: 
ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:19:29.014: INFO: >>> kubeConfig: /root/.kube/config I0605 00:19:29.047364 7 log.go:172] (0xc002f70790) (0xc00208ce60) Create stream I0605 00:19:29.047398 7 log.go:172] (0xc002f70790) (0xc00208ce60) Stream added, broadcasting: 1 I0605 00:19:29.050170 7 log.go:172] (0xc002f70790) Reply frame received for 1 I0605 00:19:29.050191 7 log.go:172] (0xc002f70790) (0xc002505ae0) Create stream I0605 00:19:29.050198 7 log.go:172] (0xc002f70790) (0xc002505ae0) Stream added, broadcasting: 3 I0605 00:19:29.051365 7 log.go:172] (0xc002f70790) Reply frame received for 3 I0605 00:19:29.051415 7 log.go:172] (0xc002f70790) (0xc0020b7c20) Create stream I0605 00:19:29.051431 7 log.go:172] (0xc002f70790) (0xc0020b7c20) Stream added, broadcasting: 5 I0605 00:19:29.052546 7 log.go:172] (0xc002f70790) Reply frame received for 5 I0605 00:19:29.123484 7 log.go:172] (0xc002f70790) Data frame received for 5 I0605 00:19:29.123553 7 log.go:172] (0xc0020b7c20) (5) Data frame handling I0605 00:19:29.123586 7 log.go:172] (0xc002f70790) Data frame received for 3 I0605 00:19:29.123602 7 log.go:172] (0xc002505ae0) (3) Data frame handling I0605 00:19:29.125491 7 log.go:172] (0xc002f70790) Data frame received for 1 I0605 00:19:29.125517 7 log.go:172] (0xc00208ce60) (1) Data frame handling I0605 00:19:29.125529 7 log.go:172] (0xc00208ce60) (1) Data frame sent I0605 00:19:29.125538 7 log.go:172] (0xc002f70790) (0xc00208ce60) Stream removed, broadcasting: 1 I0605 00:19:29.125550 7 log.go:172] (0xc002f70790) Go away received I0605 00:19:29.125734 7 log.go:172] (0xc002f70790) (0xc00208ce60) Stream removed, broadcasting: 1 I0605 00:19:29.125764 7 log.go:172] (0xc002f70790) (0xc002505ae0) Stream removed, broadcasting: 3 I0605 00:19:29.125778 7 
log.go:172] (0xc002f70790) (0xc0020b7c20) Stream removed, broadcasting: 5 Jun 5 00:19:29.125: INFO: Pod exec output: / STEP: Waiting for container to restart Jun 5 00:19:29.129: INFO: Container dapi-container, restarts: 0 Jun 5 00:19:39.136: INFO: Container dapi-container, restarts: 0 Jun 5 00:19:49.152: INFO: Container dapi-container, restarts: 0 Jun 5 00:19:59.134: INFO: Container dapi-container, restarts: 0 Jun 5 00:20:09.134: INFO: Container dapi-container, restarts: 1 Jun 5 00:20:09.134: INFO: Container has restart count: 1 STEP: Rewriting the file Jun 5 00:20:09.134: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:20:09.134: INFO: >>> kubeConfig: /root/.kube/config I0605 00:20:09.171637 7 log.go:172] (0xc002e56420) (0xc001b63040) Create stream I0605 00:20:09.171666 7 log.go:172] (0xc002e56420) (0xc001b63040) Stream added, broadcasting: 1 I0605 00:20:09.173556 7 log.go:172] (0xc002e56420) Reply frame received for 1 I0605 00:20:09.173590 7 log.go:172] (0xc002e56420) (0xc0020b7d60) Create stream I0605 00:20:09.173601 7 log.go:172] (0xc002e56420) (0xc0020b7d60) Stream added, broadcasting: 3 I0605 00:20:09.174482 7 log.go:172] (0xc002e56420) Reply frame received for 3 I0605 00:20:09.174519 7 log.go:172] (0xc002e56420) (0xc000c77a40) Create stream I0605 00:20:09.174535 7 log.go:172] (0xc002e56420) (0xc000c77a40) Stream added, broadcasting: 5 I0605 00:20:09.176148 7 log.go:172] (0xc002e56420) Reply frame received for 5 I0605 00:20:09.248443 7 log.go:172] (0xc002e56420) Data frame received for 5 I0605 00:20:09.248464 7 log.go:172] (0xc000c77a40) (5) Data frame handling I0605 00:20:09.248489 7 log.go:172] (0xc002e56420) Data frame received for 3 I0605 00:20:09.248509 7 log.go:172] (0xc0020b7d60) (3) Data frame handling I0605 
00:20:09.250222 7 log.go:172] (0xc002e56420) Data frame received for 1 I0605 00:20:09.250237 7 log.go:172] (0xc001b63040) (1) Data frame handling I0605 00:20:09.250250 7 log.go:172] (0xc001b63040) (1) Data frame sent I0605 00:20:09.250261 7 log.go:172] (0xc002e56420) (0xc001b63040) Stream removed, broadcasting: 1 I0605 00:20:09.250320 7 log.go:172] (0xc002e56420) Go away received I0605 00:20:09.250347 7 log.go:172] (0xc002e56420) (0xc001b63040) Stream removed, broadcasting: 1 I0605 00:20:09.250364 7 log.go:172] (0xc002e56420) (0xc0020b7d60) Stream removed, broadcasting: 3 I0605 00:20:09.250376 7 log.go:172] (0xc002e56420) (0xc000c77a40) Stream removed, broadcasting: 5 Jun 5 00:20:09.250: INFO: Exec stderr: "" Jun 5 00:20:09.250: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jun 5 00:20:37.258: INFO: Container has restart count: 2 Jun 5 00:21:39.278: INFO: Container restart has stabilized STEP: test for subpath mounted with old value Jun 5 00:21:39.281: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:21:39.282: INFO: >>> kubeConfig: /root/.kube/config I0605 00:21:39.309595 7 log.go:172] (0xc002f704d0) (0xc001468fa0) Create stream I0605 00:21:39.309625 7 log.go:172] (0xc002f704d0) (0xc001468fa0) Stream added, broadcasting: 1 I0605 00:21:39.311285 7 log.go:172] (0xc002f704d0) Reply frame received for 1 I0605 00:21:39.311308 7 log.go:172] (0xc002f704d0) (0xc00208c960) Create stream I0605 00:21:39.311314 7 log.go:172] (0xc002f704d0) (0xc00208c960) Stream added, broadcasting: 3 I0605 00:21:39.312364 7 log.go:172] (0xc002f704d0) Reply frame received for 3 I0605 00:21:39.312420 7 log.go:172] (0xc002f704d0) (0xc000c61540) Create stream I0605 00:21:39.312442 7 log.go:172] (0xc002f704d0) (0xc000c61540) Stream added, 
broadcasting: 5 I0605 00:21:39.313673 7 log.go:172] (0xc002f704d0) Reply frame received for 5 I0605 00:21:39.383253 7 log.go:172] (0xc002f704d0) Data frame received for 5 I0605 00:21:39.383290 7 log.go:172] (0xc000c61540) (5) Data frame handling I0605 00:21:39.383307 7 log.go:172] (0xc002f704d0) Data frame received for 3 I0605 00:21:39.383314 7 log.go:172] (0xc00208c960) (3) Data frame handling I0605 00:21:39.384206 7 log.go:172] (0xc002f704d0) Data frame received for 1 I0605 00:21:39.384281 7 log.go:172] (0xc001468fa0) (1) Data frame handling I0605 00:21:39.384320 7 log.go:172] (0xc001468fa0) (1) Data frame sent I0605 00:21:39.384345 7 log.go:172] (0xc002f704d0) (0xc001468fa0) Stream removed, broadcasting: 1 I0605 00:21:39.384368 7 log.go:172] (0xc002f704d0) Go away received I0605 00:21:39.384470 7 log.go:172] (0xc002f704d0) (0xc001468fa0) Stream removed, broadcasting: 1 I0605 00:21:39.384485 7 log.go:172] (0xc002f704d0) (0xc00208c960) Stream removed, broadcasting: 3 I0605 00:21:39.384493 7 log.go:172] (0xc002f704d0) (0xc000c61540) Stream removed, broadcasting: 5 Jun 5 00:21:39.404: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-2764 PodName:var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:21:39.404: INFO: >>> kubeConfig: /root/.kube/config I0605 00:21:39.436592 7 log.go:172] (0xc002f70b00) (0xc001469d60) Create stream I0605 00:21:39.436618 7 log.go:172] (0xc002f70b00) (0xc001469d60) Stream added, broadcasting: 1 I0605 00:21:39.438565 7 log.go:172] (0xc002f70b00) Reply frame received for 1 I0605 00:21:39.438624 7 log.go:172] (0xc002f70b00) (0xc000c77b80) Create stream I0605 00:21:39.438654 7 log.go:172] (0xc002f70b00) (0xc000c77b80) Stream added, broadcasting: 3 I0605 00:21:39.439528 7 log.go:172] (0xc002f70b00) Reply frame received for 3 I0605 00:21:39.439555 7 log.go:172] (0xc002f70b00) (0xc001469ea0) Create stream I0605 00:21:39.439564 7 log.go:172] (0xc002f70b00) (0xc001469ea0) Stream added, broadcasting: 5 I0605 00:21:39.440417 7 log.go:172] (0xc002f70b00) Reply frame received for 5 I0605 00:21:39.510976 7 log.go:172] (0xc002f70b00) Data frame received for 5 I0605 00:21:39.511020 7 log.go:172] (0xc001469ea0) (5) Data frame handling I0605 00:21:39.511077 7 log.go:172] (0xc002f70b00) Data frame received for 3 I0605 00:21:39.511114 7 log.go:172] (0xc000c77b80) (3) Data frame handling I0605 00:21:39.512150 7 log.go:172] (0xc002f70b00) Data frame received for 1 I0605 00:21:39.512192 7 log.go:172] (0xc001469d60) (1) Data frame handling I0605 00:21:39.512213 7 log.go:172] (0xc001469d60) (1) Data frame sent I0605 00:21:39.512224 7 log.go:172] (0xc002f70b00) (0xc001469d60) Stream removed, broadcasting: 1 I0605 00:21:39.512305 7 log.go:172] (0xc002f70b00) (0xc001469d60) Stream removed, broadcasting: 1 I0605 00:21:39.512321 7 log.go:172] (0xc002f70b00) (0xc000c77b80) Stream removed, broadcasting: 3 I0605 00:21:39.512350 7 log.go:172] (0xc002f70b00) Go away received I0605 00:21:39.512405 7 log.go:172] (0xc002f70b00) 
(0xc001469ea0) Stream removed, broadcasting: 5 Jun 5 00:21:39.512: INFO: Deleting pod "var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d" in namespace "var-expansion-2764" Jun 5 00:21:39.518: INFO: Wait up to 5m0s for pod "var-expansion-bd469b98-b736-47fe-9ef8-9ee4b8aad70d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:15.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2764" for this suite. • [SLOW TEST:175.264 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":151,"skipped":2242,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:15.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-5243493a-9be5-4800-870a-d20a5310971f STEP: Creating a pod to test consume secrets Jun 5 00:22:15.630: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a" in namespace "projected-6072" to be "Succeeded or Failed" Jun 5 00:22:15.642: INFO: Pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.688608ms Jun 5 00:22:17.644: INFO: Pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014434595s Jun 5 00:22:19.649: INFO: Pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a": Phase="Running", Reason="", readiness=true. Elapsed: 4.018697969s Jun 5 00:22:21.653: INFO: Pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023186173s STEP: Saw pod success Jun 5 00:22:21.653: INFO: Pod "pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a" satisfied condition "Succeeded or Failed" Jun 5 00:22:21.656: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a container projected-secret-volume-test: STEP: delete the pod Jun 5 00:22:21.710: INFO: Waiting for pod pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a to disappear Jun 5 00:22:21.724: INFO: Pod pod-projected-secrets-a0066a11-c854-43f6-bbee-7f1c79b1c00a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:21.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6072" for this suite. 
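Editor's note: the pod this Projected secret test creates mounts a secret through a `projected` volume and remaps a key to a custom path, which is what "with mappings" refers to. A sketch of such a manifest follows; every name, the image, and the key/path mapping are illustrative assumptions, not values taken from the log:

```yaml
# Illustrative pod spec for a projected secret volume with a key->path mapping.
# The secret "projected-secret-test-map-example" and key "data-1" are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["mounttest", "--file_content=/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping": key data-1 exposed at this path
```

The pod reads the secret back through the remapped path and exits, which is why the test waits for the "Succeeded or Failed" condition rather than for readiness.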
• [SLOW TEST:6.175 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2243,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:21.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:22:21.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3123' Jun 5 00:22:22.285: INFO: stderr: "" Jun 5 00:22:22.285: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jun 5 00:22:22.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3123' Jun 5 00:22:22.585: 
INFO: stderr: "" Jun 5 00:22:22.585: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 5 00:22:23.589: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:22:23.589: INFO: Found 0 / 1 Jun 5 00:22:24.595: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:22:24.595: INFO: Found 0 / 1 Jun 5 00:22:25.590: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:22:25.590: INFO: Found 1 / 1 Jun 5 00:22:25.590: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 5 00:22:25.593: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:22:25.593: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 5 00:22:25.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-ltgtg --namespace=kubectl-3123' Jun 5 00:22:25.723: INFO: stderr: "" Jun 5 00:22:25.723: INFO: stdout: "Name: agnhost-master-ltgtg\nNamespace: kubectl-3123\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Fri, 05 Jun 2020 00:22:22 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.87\nIPs:\n IP: 10.244.2.87\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://a71f9db8e355b015935ac5677223afbb1b0ba7d02bc6d83df9778f501196436f\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 05 Jun 2020 00:22:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vql65 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vql65:\n Type: Secret (a volume 
populated by a Secret)\n SecretName: default-token-vql65\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-3123/agnhost-master-ltgtg to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Jun 5 00:22:25.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3123' Jun 5 00:22:25.865: INFO: stderr: "" Jun 5 00:22:25.866: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3123\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-ltgtg\n" Jun 5 00:22:25.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3123' Jun 5 00:22:25.974: INFO: stderr: "" Jun 5 00:22:25.974: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3123\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 
10.104.71.237\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.87:6379\nSession Affinity: None\nEvents: \n" Jun 5 00:22:25.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' Jun 5 00:22:26.113: INFO: stderr: "" Jun 5 00:22:26.113: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 05 Jun 2020 00:22:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 05 Jun 2020 00:18:00 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 05 Jun 2020 00:18:00 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 05 Jun 2020 00:18:00 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 05 Jun 2020 00:18:00 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 
0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 36d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 36d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 36d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jun 5 00:22:26.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-3123' Jun 5 00:22:26.226: INFO: stderr: "" Jun 5 00:22:26.226: INFO: stdout: "Name: kubectl-3123\nLabels: e2e-framework=kubectl\n 
e2e-run=4bea8d16-e345-4d47-bfb5-c0567c47b5c7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3123" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":153,"skipped":2254,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:26.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 5 00:22:26.331: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 5 00:22:31.334: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:31.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "replication-controller-2303" for this suite. • [SLOW TEST:5.257 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":154,"skipped":2265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:31.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:31.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7451" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":155,"skipped":2280,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:31.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-c8ddd987-96b8-457d-93f1-34ec8db03573 STEP: Creating secret with name secret-projected-all-test-volume-5c7c7107-dffd-4af2-a29f-5b196fccd1a5 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 5 00:22:31.892: INFO: Waiting up to 5m0s for pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698" in namespace "projected-6455" to be "Succeeded or Failed" Jun 5 00:22:31.895: INFO: Pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030844ms Jun 5 00:22:33.974: INFO: Pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082196679s Jun 5 00:22:36.022: INFO: Pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.129847764s Jun 5 00:22:38.051: INFO: Pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.158815278s STEP: Saw pod success Jun 5 00:22:38.051: INFO: Pod "projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698" satisfied condition "Succeeded or Failed" Jun 5 00:22:38.054: INFO: Trying to get logs from node latest-worker pod projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698 container projected-all-volume-test: STEP: delete the pod Jun 5 00:22:38.208: INFO: Waiting for pod projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698 to disappear Jun 5 00:22:38.211: INFO: Pod projected-volume-6788581d-94cf-45a6-93b4-cd9e8ef03698 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:38.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6455" for this suite. 
• [SLOW TEST:6.436 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2300,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:38.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:22:38.307: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 5 00:22:41.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4228 create -f -' Jun 5 00:22:44.339: INFO: stderr: "" Jun 5 00:22:44.339: INFO: stdout: 
"e2e-test-crd-publish-openapi-7630-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 5 00:22:44.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4228 delete e2e-test-crd-publish-openapi-7630-crds test-cr' Jun 5 00:22:44.468: INFO: stderr: "" Jun 5 00:22:44.468: INFO: stdout: "e2e-test-crd-publish-openapi-7630-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 5 00:22:44.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4228 apply -f -' Jun 5 00:22:44.710: INFO: stderr: "" Jun 5 00:22:44.711: INFO: stdout: "e2e-test-crd-publish-openapi-7630-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 5 00:22:44.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4228 delete e2e-test-crd-publish-openapi-7630-crds test-cr' Jun 5 00:22:44.850: INFO: stderr: "" Jun 5 00:22:44.850: INFO: stdout: "e2e-test-crd-publish-openapi-7630-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 5 00:22:44.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7630-crds' Jun 5 00:22:45.093: INFO: stderr: "" Jun 5 00:22:45.093: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7630-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:48.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4228" for this suite. • [SLOW TEST:9.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":157,"skipped":2304,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Jun 5 00:22:48.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0605 00:22:58.124869 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 5 00:22:58.124: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:22:58.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2701" for this suite. 
• [SLOW TEST:10.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":158,"skipped":2316,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:22:58.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-44daaacc-a8f4-4218-b596-10b4ed831e3f STEP: Creating a pod to test consume secrets Jun 5 00:22:58.244: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0" in namespace "projected-273" to be "Succeeded or Failed" Jun 5 00:22:58.259: INFO: Pod "pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.979021ms Jun 5 00:23:00.264: INFO: Pod "pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019318879s Jun 5 00:23:02.267: INFO: Pod "pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023013683s STEP: Saw pod success Jun 5 00:23:02.267: INFO: Pod "pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0" satisfied condition "Succeeded or Failed" Jun 5 00:23:02.270: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0 container projected-secret-volume-test: STEP: delete the pod Jun 5 00:23:02.309: INFO: Waiting for pod pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0 to disappear Jun 5 00:23:02.319: INFO: Pod pod-projected-secrets-2483dc6e-ae1a-4836-bfa1-18e94e2dccd0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:02.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-273" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:02.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 5 00:23:02.541: INFO: namespace kubectl-7505 Jun 5 00:23:02.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7505' Jun 5 00:23:02.874: INFO: stderr: "" Jun 5 00:23:02.874: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 5 00:23:03.879: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:23:03.879: INFO: Found 0 / 1 Jun 5 00:23:04.878: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:23:04.878: INFO: Found 0 / 1 Jun 5 00:23:05.879: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:23:05.879: INFO: Found 0 / 1 Jun 5 00:23:06.879: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:23:06.880: INFO: Found 1 / 1 Jun 5 00:23:06.880: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jun 5 00:23:06.884: INFO: Selector matched 1 pods for map[app:agnhost] Jun 5 00:23:06.884: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 5 00:23:06.884: INFO: wait on agnhost-master startup in kubectl-7505 Jun 5 00:23:06.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-s9zl8 agnhost-master --namespace=kubectl-7505' Jun 5 00:23:07.004: INFO: stderr: "" Jun 5 00:23:07.004: INFO: stdout: "Paused\n" STEP: exposing RC Jun 5 00:23:07.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7505' Jun 5 00:23:07.192: INFO: stderr: "" Jun 5 00:23:07.192: INFO: stdout: "service/rm2 exposed\n" Jun 5 00:23:07.260: INFO: Service rm2 in namespace kubectl-7505 found. STEP: exposing service Jun 5 00:23:09.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7505' Jun 5 00:23:09.415: INFO: stderr: "" Jun 5 00:23:09.415: INFO: stdout: "service/rm3 exposed\n" Jun 5 00:23:09.435: INFO: Service rm3 in namespace kubectl-7505 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:11.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7505" for this suite. 
• [SLOW TEST:9.119 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":160,"skipped":2348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:11.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2fbccddc-5413-460d-83bd-2f067c498c15 STEP: Creating a pod to test consume configMaps Jun 5 00:23:11.564: INFO: Waiting up to 5m0s for pod "pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190" in namespace "configmap-3673" to be "Succeeded or Failed" Jun 5 00:23:11.578: INFO: Pod "pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.026567ms Jun 5 00:23:13.582: INFO: Pod "pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017919936s Jun 5 00:23:15.586: INFO: Pod "pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022002516s STEP: Saw pod success Jun 5 00:23:15.586: INFO: Pod "pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190" satisfied condition "Succeeded or Failed" Jun 5 00:23:15.589: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190 container configmap-volume-test: STEP: delete the pod Jun 5 00:23:15.666: INFO: Waiting for pod pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190 to disappear Jun 5 00:23:15.679: INFO: Pod pod-configmaps-4be98839-e9db-4d13-8c8a-c21720128190 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:15.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3673" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2396,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:15.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 5 00:23:15.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4780 /api/v1/namespaces/watch-4780/configmaps/e2e-watch-test-watch-closed ddb81a33-cf1f-46ae-8b8d-148dd728c56e 10337687 0 2020-06-05 00:23:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-05 00:23:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:23:15.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4780 /api/v1/namespaces/watch-4780/configmaps/e2e-watch-test-watch-closed ddb81a33-cf1f-46ae-8b8d-148dd728c56e 10337688 0 2020-06-05 00:23:15 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-05 00:23:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 5 00:23:15.880: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4780 /api/v1/namespaces/watch-4780/configmaps/e2e-watch-test-watch-closed ddb81a33-cf1f-46ae-8b8d-148dd728c56e 10337689 0 2020-06-05 00:23:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-05 00:23:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:23:15.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4780 /api/v1/namespaces/watch-4780/configmaps/e2e-watch-test-watch-closed ddb81a33-cf1f-46ae-8b8d-148dd728c56e 10337690 0 2020-06-05 00:23:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-05 00:23:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:15.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4780" for 
this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":162,"skipped":2399,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:15.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 5 00:23:21.151: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:21.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5012" for this suite. 
• [SLOW TEST:5.310 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:21.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:21.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1537" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":164,"skipped":2452,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:21.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:23:21.359: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:22.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4273" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":165,"skipped":2468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:22.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-073396f5-6ef5-4b38-9c23-37c593438929 STEP: Creating a pod to test consume secrets Jun 5 00:23:24.435: INFO: Waiting up to 5m0s for pod "pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b" in namespace "secrets-8930" to be "Succeeded or Failed" Jun 5 00:23:24.455: INFO: Pod "pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.683839ms Jun 5 00:23:26.579: INFO: Pod "pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144439117s Jun 5 00:23:28.583: INFO: Pod "pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.148448562s STEP: Saw pod success Jun 5 00:23:28.583: INFO: Pod "pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b" satisfied condition "Succeeded or Failed" Jun 5 00:23:28.586: INFO: Trying to get logs from node latest-worker pod pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b container secret-volume-test: STEP: delete the pod Jun 5 00:23:28.709: INFO: Waiting for pod pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b to disappear Jun 5 00:23:28.782: INFO: Pod pod-secrets-2d3708d6-0aec-4560-a190-eabd1722212b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:23:28.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8930" for this suite. • [SLOW TEST:6.393 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2529,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:23:28.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4909 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4909 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4909 Jun 5 00:23:29.010: INFO: Found 0 stateful pods, waiting for 1 Jun 5 00:23:39.015: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 5 00:23:39.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:23:39.263: INFO: stderr: "I0605 00:23:39.158493 1682 log.go:172] (0xc000986000) (0xc0003fb2c0) Create stream\nI0605 00:23:39.158564 1682 log.go:172] (0xc000986000) (0xc0003fb2c0) Stream added, broadcasting: 1\nI0605 00:23:39.160743 1682 log.go:172] (0xc000986000) Reply frame received for 1\nI0605 00:23:39.160768 1682 log.go:172] (0xc000986000) (0xc00037c280) Create stream\nI0605 00:23:39.160775 1682 log.go:172] (0xc000986000) (0xc00037c280) Stream added, broadcasting: 3\nI0605 00:23:39.161748 1682 log.go:172] (0xc000986000) Reply frame received for 3\nI0605 00:23:39.161788 1682 log.go:172] (0xc000986000) (0xc00034ae60) Create stream\nI0605 00:23:39.161805 1682 
log.go:172] (0xc000986000) (0xc00034ae60) Stream added, broadcasting: 5\nI0605 00:23:39.162538 1682 log.go:172] (0xc000986000) Reply frame received for 5\nI0605 00:23:39.232475 1682 log.go:172] (0xc000986000) Data frame received for 5\nI0605 00:23:39.232498 1682 log.go:172] (0xc00034ae60) (5) Data frame handling\nI0605 00:23:39.232510 1682 log.go:172] (0xc00034ae60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:23:39.256992 1682 log.go:172] (0xc000986000) Data frame received for 5\nI0605 00:23:39.257036 1682 log.go:172] (0xc00034ae60) (5) Data frame handling\nI0605 00:23:39.257067 1682 log.go:172] (0xc000986000) Data frame received for 3\nI0605 00:23:39.257099 1682 log.go:172] (0xc00037c280) (3) Data frame handling\nI0605 00:23:39.257173 1682 log.go:172] (0xc00037c280) (3) Data frame sent\nI0605 00:23:39.257185 1682 log.go:172] (0xc000986000) Data frame received for 3\nI0605 00:23:39.257191 1682 log.go:172] (0xc00037c280) (3) Data frame handling\nI0605 00:23:39.258940 1682 log.go:172] (0xc000986000) Data frame received for 1\nI0605 00:23:39.258967 1682 log.go:172] (0xc0003fb2c0) (1) Data frame handling\nI0605 00:23:39.258993 1682 log.go:172] (0xc0003fb2c0) (1) Data frame sent\nI0605 00:23:39.259014 1682 log.go:172] (0xc000986000) (0xc0003fb2c0) Stream removed, broadcasting: 1\nI0605 00:23:39.259032 1682 log.go:172] (0xc000986000) Go away received\nI0605 00:23:39.259402 1682 log.go:172] (0xc000986000) (0xc0003fb2c0) Stream removed, broadcasting: 1\nI0605 00:23:39.259459 1682 log.go:172] (0xc000986000) (0xc00037c280) Stream removed, broadcasting: 3\nI0605 00:23:39.259513 1682 log.go:172] (0xc000986000) (0xc00034ae60) Stream removed, broadcasting: 5\n" Jun 5 00:23:39.263: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:23:39.263: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:23:39.267: INFO: 
Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 5 00:23:49.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:23:49.271: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:23:49.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999661s Jun 5 00:23:50.302: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982130314s Jun 5 00:23:51.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977385038s Jun 5 00:23:52.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972179889s Jun 5 00:23:53.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.957743032s Jun 5 00:23:54.330: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952636303s Jun 5 00:23:55.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.949403003s Jun 5 00:23:56.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.944552037s Jun 5 00:23:57.380: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.903788741s Jun 5 00:23:58.394: INFO: Verifying statefulset ss doesn't scale past 1 for another 899.995793ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4909 Jun 5 00:23:59.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:23:59.625: INFO: stderr: "I0605 00:23:59.535084 1703 log.go:172] (0xc0009b34a0) (0xc0002692c0) Create stream\nI0605 00:23:59.535161 1703 log.go:172] (0xc0009b34a0) (0xc0002692c0) Stream added, broadcasting: 1\nI0605 00:23:59.540258 1703 log.go:172] (0xc0009b34a0) Reply frame received for 1\nI0605 00:23:59.540301 1703 log.go:172] (0xc0009b34a0) (0xc0006d46e0) Create stream\nI0605 
00:23:59.540313 1703 log.go:172] (0xc0009b34a0) (0xc0006d46e0) Stream added, broadcasting: 3\nI0605 00:23:59.541343 1703 log.go:172] (0xc0009b34a0) Reply frame received for 3\nI0605 00:23:59.541372 1703 log.go:172] (0xc0009b34a0) (0xc000558640) Create stream\nI0605 00:23:59.541381 1703 log.go:172] (0xc0009b34a0) (0xc000558640) Stream added, broadcasting: 5\nI0605 00:23:59.542260 1703 log.go:172] (0xc0009b34a0) Reply frame received for 5\nI0605 00:23:59.617904 1703 log.go:172] (0xc0009b34a0) Data frame received for 3\nI0605 00:23:59.617947 1703 log.go:172] (0xc0006d46e0) (3) Data frame handling\nI0605 00:23:59.617972 1703 log.go:172] (0xc0006d46e0) (3) Data frame sent\nI0605 00:23:59.617999 1703 log.go:172] (0xc0009b34a0) Data frame received for 5\nI0605 00:23:59.618046 1703 log.go:172] (0xc000558640) (5) Data frame handling\nI0605 00:23:59.618069 1703 log.go:172] (0xc000558640) (5) Data frame sent\nI0605 00:23:59.618086 1703 log.go:172] (0xc0009b34a0) Data frame received for 5\nI0605 00:23:59.618099 1703 log.go:172] (0xc000558640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:23:59.618113 1703 log.go:172] (0xc0009b34a0) Data frame received for 3\nI0605 00:23:59.618125 1703 log.go:172] (0xc0006d46e0) (3) Data frame handling\nI0605 00:23:59.619549 1703 log.go:172] (0xc0009b34a0) Data frame received for 1\nI0605 00:23:59.619576 1703 log.go:172] (0xc0002692c0) (1) Data frame handling\nI0605 00:23:59.619588 1703 log.go:172] (0xc0002692c0) (1) Data frame sent\nI0605 00:23:59.619604 1703 log.go:172] (0xc0009b34a0) (0xc0002692c0) Stream removed, broadcasting: 1\nI0605 00:23:59.619661 1703 log.go:172] (0xc0009b34a0) Go away received\nI0605 00:23:59.619995 1703 log.go:172] (0xc0009b34a0) (0xc0002692c0) Stream removed, broadcasting: 1\nI0605 00:23:59.620021 1703 log.go:172] (0xc0009b34a0) (0xc0006d46e0) Stream removed, broadcasting: 3\nI0605 00:23:59.620036 1703 log.go:172] (0xc0009b34a0) (0xc000558640) Stream removed, broadcasting: 
5\n" Jun 5 00:23:59.625: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:23:59.625: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:23:59.629: INFO: Found 1 stateful pods, waiting for 3 Jun 5 00:24:09.635: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:24:09.635: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:24:09.635: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 5 00:24:09.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:24:09.895: INFO: stderr: "I0605 00:24:09.798319 1723 log.go:172] (0xc0009636b0) (0xc000a680a0) Create stream\nI0605 00:24:09.798381 1723 log.go:172] (0xc0009636b0) (0xc000a680a0) Stream added, broadcasting: 1\nI0605 00:24:09.803565 1723 log.go:172] (0xc0009636b0) Reply frame received for 1\nI0605 00:24:09.803612 1723 log.go:172] (0xc0009636b0) (0xc0000e0dc0) Create stream\nI0605 00:24:09.803623 1723 log.go:172] (0xc0009636b0) (0xc0000e0dc0) Stream added, broadcasting: 3\nI0605 00:24:09.804398 1723 log.go:172] (0xc0009636b0) Reply frame received for 3\nI0605 00:24:09.804419 1723 log.go:172] (0xc0009636b0) (0xc00069dea0) Create stream\nI0605 00:24:09.804426 1723 log.go:172] (0xc0009636b0) (0xc00069dea0) Stream added, broadcasting: 5\nI0605 00:24:09.805337 1723 log.go:172] (0xc0009636b0) Reply frame received for 5\nI0605 00:24:09.885858 1723 log.go:172] (0xc0009636b0) Data frame received for 3\nI0605 00:24:09.885887 1723 log.go:172] (0xc0000e0dc0) (3) Data frame 
handling\nI0605 00:24:09.885907 1723 log.go:172] (0xc0000e0dc0) (3) Data frame sent\nI0605 00:24:09.885917 1723 log.go:172] (0xc0009636b0) Data frame received for 3\nI0605 00:24:09.885926 1723 log.go:172] (0xc0000e0dc0) (3) Data frame handling\nI0605 00:24:09.886054 1723 log.go:172] (0xc0009636b0) Data frame received for 5\nI0605 00:24:09.886083 1723 log.go:172] (0xc00069dea0) (5) Data frame handling\nI0605 00:24:09.886105 1723 log.go:172] (0xc00069dea0) (5) Data frame sent\nI0605 00:24:09.886120 1723 log.go:172] (0xc0009636b0) Data frame received for 5\nI0605 00:24:09.886137 1723 log.go:172] (0xc00069dea0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:24:09.887672 1723 log.go:172] (0xc0009636b0) Data frame received for 1\nI0605 00:24:09.887692 1723 log.go:172] (0xc000a680a0) (1) Data frame handling\nI0605 00:24:09.887700 1723 log.go:172] (0xc000a680a0) (1) Data frame sent\nI0605 00:24:09.887709 1723 log.go:172] (0xc0009636b0) (0xc000a680a0) Stream removed, broadcasting: 1\nI0605 00:24:09.887800 1723 log.go:172] (0xc0009636b0) Go away received\nI0605 00:24:09.888008 1723 log.go:172] (0xc0009636b0) (0xc000a680a0) Stream removed, broadcasting: 1\nI0605 00:24:09.888026 1723 log.go:172] (0xc0009636b0) (0xc0000e0dc0) Stream removed, broadcasting: 3\nI0605 00:24:09.888033 1723 log.go:172] (0xc0009636b0) (0xc00069dea0) Stream removed, broadcasting: 5\n" Jun 5 00:24:09.895: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:24:09.895: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:24:09.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:24:10.153: INFO: stderr: "I0605 00:24:10.028670 1744 log.go:172] (0xc000b86fd0) 
(0xc000a6c280) Create stream\nI0605 00:24:10.028756 1744 log.go:172] (0xc000b86fd0) (0xc000a6c280) Stream added, broadcasting: 1\nI0605 00:24:10.034786 1744 log.go:172] (0xc000b86fd0) Reply frame received for 1\nI0605 00:24:10.034828 1744 log.go:172] (0xc000b86fd0) (0xc000851040) Create stream\nI0605 00:24:10.034842 1744 log.go:172] (0xc000b86fd0) (0xc000851040) Stream added, broadcasting: 3\nI0605 00:24:10.035759 1744 log.go:172] (0xc000b86fd0) Reply frame received for 3\nI0605 00:24:10.035808 1744 log.go:172] (0xc000b86fd0) (0xc000834e60) Create stream\nI0605 00:24:10.035826 1744 log.go:172] (0xc000b86fd0) (0xc000834e60) Stream added, broadcasting: 5\nI0605 00:24:10.036900 1744 log.go:172] (0xc000b86fd0) Reply frame received for 5\nI0605 00:24:10.111761 1744 log.go:172] (0xc000b86fd0) Data frame received for 5\nI0605 00:24:10.111781 1744 log.go:172] (0xc000834e60) (5) Data frame handling\nI0605 00:24:10.111795 1744 log.go:172] (0xc000834e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:24:10.145451 1744 log.go:172] (0xc000b86fd0) Data frame received for 3\nI0605 00:24:10.145486 1744 log.go:172] (0xc000851040) (3) Data frame handling\nI0605 00:24:10.145505 1744 log.go:172] (0xc000851040) (3) Data frame sent\nI0605 00:24:10.145512 1744 log.go:172] (0xc000b86fd0) Data frame received for 3\nI0605 00:24:10.145519 1744 log.go:172] (0xc000851040) (3) Data frame handling\nI0605 00:24:10.145561 1744 log.go:172] (0xc000b86fd0) Data frame received for 5\nI0605 00:24:10.145579 1744 log.go:172] (0xc000834e60) (5) Data frame handling\nI0605 00:24:10.147517 1744 log.go:172] (0xc000b86fd0) Data frame received for 1\nI0605 00:24:10.147534 1744 log.go:172] (0xc000a6c280) (1) Data frame handling\nI0605 00:24:10.147550 1744 log.go:172] (0xc000a6c280) (1) Data frame sent\nI0605 00:24:10.147564 1744 log.go:172] (0xc000b86fd0) (0xc000a6c280) Stream removed, broadcasting: 1\nI0605 00:24:10.147579 1744 log.go:172] (0xc000b86fd0) Go away 
received\nI0605 00:24:10.147870 1744 log.go:172] (0xc000b86fd0) (0xc000a6c280) Stream removed, broadcasting: 1\nI0605 00:24:10.147894 1744 log.go:172] (0xc000b86fd0) (0xc000851040) Stream removed, broadcasting: 3\nI0605 00:24:10.147904 1744 log.go:172] (0xc000b86fd0) (0xc000834e60) Stream removed, broadcasting: 5\n" Jun 5 00:24:10.153: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:24:10.153: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:24:10.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:24:10.436: INFO: stderr: "I0605 00:24:10.339922 1766 log.go:172] (0xc000af5340) (0xc000af8140) Create stream\nI0605 00:24:10.339977 1766 log.go:172] (0xc000af5340) (0xc000af8140) Stream added, broadcasting: 1\nI0605 00:24:10.344742 1766 log.go:172] (0xc000af5340) Reply frame received for 1\nI0605 00:24:10.344777 1766 log.go:172] (0xc000af5340) (0xc00083fea0) Create stream\nI0605 00:24:10.344789 1766 log.go:172] (0xc000af5340) (0xc00083fea0) Stream added, broadcasting: 3\nI0605 00:24:10.345825 1766 log.go:172] (0xc000af5340) Reply frame received for 3\nI0605 00:24:10.345858 1766 log.go:172] (0xc000af5340) (0xc0006721e0) Create stream\nI0605 00:24:10.345873 1766 log.go:172] (0xc000af5340) (0xc0006721e0) Stream added, broadcasting: 5\nI0605 00:24:10.346803 1766 log.go:172] (0xc000af5340) Reply frame received for 5\nI0605 00:24:10.405317 1766 log.go:172] (0xc000af5340) Data frame received for 5\nI0605 00:24:10.405348 1766 log.go:172] (0xc0006721e0) (5) Data frame handling\nI0605 00:24:10.405360 1766 log.go:172] (0xc0006721e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:24:10.427985 1766 log.go:172] (0xc000af5340) Data 
frame received for 3\nI0605 00:24:10.428008 1766 log.go:172] (0xc00083fea0) (3) Data frame handling\nI0605 00:24:10.428020 1766 log.go:172] (0xc00083fea0) (3) Data frame sent\nI0605 00:24:10.428028 1766 log.go:172] (0xc000af5340) Data frame received for 3\nI0605 00:24:10.428036 1766 log.go:172] (0xc00083fea0) (3) Data frame handling\nI0605 00:24:10.428256 1766 log.go:172] (0xc000af5340) Data frame received for 5\nI0605 00:24:10.428276 1766 log.go:172] (0xc0006721e0) (5) Data frame handling\nI0605 00:24:10.430205 1766 log.go:172] (0xc000af5340) Data frame received for 1\nI0605 00:24:10.430224 1766 log.go:172] (0xc000af8140) (1) Data frame handling\nI0605 00:24:10.430236 1766 log.go:172] (0xc000af8140) (1) Data frame sent\nI0605 00:24:10.430364 1766 log.go:172] (0xc000af5340) (0xc000af8140) Stream removed, broadcasting: 1\nI0605 00:24:10.430563 1766 log.go:172] (0xc000af5340) Go away received\nI0605 00:24:10.430870 1766 log.go:172] (0xc000af5340) (0xc000af8140) Stream removed, broadcasting: 1\nI0605 00:24:10.430894 1766 log.go:172] (0xc000af5340) (0xc00083fea0) Stream removed, broadcasting: 3\nI0605 00:24:10.430922 1766 log.go:172] (0xc000af5340) (0xc0006721e0) Stream removed, broadcasting: 5\n" Jun 5 00:24:10.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:24:10.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:24:10.436: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:24:10.439: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 5 00:24:20.465: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:24:20.465: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:24:20.465: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 5 
00:24:20.479: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999976s Jun 5 00:24:21.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99193232s Jun 5 00:24:22.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986352481s Jun 5 00:24:23.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980645237s Jun 5 00:24:24.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975968119s Jun 5 00:24:25.505: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970785056s Jun 5 00:24:26.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965452831s Jun 5 00:24:27.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960727509s Jun 5 00:24:28.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956217816s Jun 5 00:24:29.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.531326ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4909 Jun 5 00:24:30.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:24:30.763: INFO: stderr: "I0605 00:24:30.667806 1787 log.go:172] (0xc0006a0160) (0xc00061e6e0) Create stream\nI0605 00:24:30.667855 1787 log.go:172] (0xc0006a0160) (0xc00061e6e0) Stream added, broadcasting: 1\nI0605 00:24:30.669827 1787 log.go:172] (0xc0006a0160) Reply frame received for 1\nI0605 00:24:30.669872 1787 log.go:172] (0xc0006a0160) (0xc000452f00) Create stream\nI0605 00:24:30.669883 1787 log.go:172] (0xc0006a0160) (0xc000452f00) Stream added, broadcasting: 3\nI0605 00:24:30.670568 1787 log.go:172] (0xc0006a0160) Reply frame received for 3\nI0605 00:24:30.670596 1787 log.go:172] (0xc0006a0160) (0xc00061efa0) Create stream\nI0605 00:24:30.670606 1787 
log.go:172] (0xc0006a0160) (0xc00061efa0) Stream added, broadcasting: 5\nI0605 00:24:30.671179 1787 log.go:172] (0xc0006a0160) Reply frame received for 5\nI0605 00:24:30.755992 1787 log.go:172] (0xc0006a0160) Data frame received for 3\nI0605 00:24:30.756023 1787 log.go:172] (0xc000452f00) (3) Data frame handling\nI0605 00:24:30.756032 1787 log.go:172] (0xc000452f00) (3) Data frame sent\nI0605 00:24:30.756038 1787 log.go:172] (0xc0006a0160) Data frame received for 3\nI0605 00:24:30.756043 1787 log.go:172] (0xc000452f00) (3) Data frame handling\nI0605 00:24:30.756074 1787 log.go:172] (0xc0006a0160) Data frame received for 5\nI0605 00:24:30.756107 1787 log.go:172] (0xc00061efa0) (5) Data frame handling\nI0605 00:24:30.756140 1787 log.go:172] (0xc00061efa0) (5) Data frame sent\nI0605 00:24:30.756158 1787 log.go:172] (0xc0006a0160) Data frame received for 5\nI0605 00:24:30.756174 1787 log.go:172] (0xc00061efa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:24:30.757739 1787 log.go:172] (0xc0006a0160) Data frame received for 1\nI0605 00:24:30.757752 1787 log.go:172] (0xc00061e6e0) (1) Data frame handling\nI0605 00:24:30.757764 1787 log.go:172] (0xc00061e6e0) (1) Data frame sent\nI0605 00:24:30.757776 1787 log.go:172] (0xc0006a0160) (0xc00061e6e0) Stream removed, broadcasting: 1\nI0605 00:24:30.757857 1787 log.go:172] (0xc0006a0160) Go away received\nI0605 00:24:30.758040 1787 log.go:172] (0xc0006a0160) (0xc00061e6e0) Stream removed, broadcasting: 1\nI0605 00:24:30.758053 1787 log.go:172] (0xc0006a0160) (0xc000452f00) Stream removed, broadcasting: 3\nI0605 00:24:30.758059 1787 log.go:172] (0xc0006a0160) (0xc00061efa0) Stream removed, broadcasting: 5\n" Jun 5 00:24:30.763: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:24:30.763: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:24:30.763: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:24:30.972: INFO: stderr: "I0605 00:24:30.903884 1808 log.go:172] (0xc0005200b0) (0xc0004fcf00) Create stream\nI0605 00:24:30.903950 1808 log.go:172] (0xc0005200b0) (0xc0004fcf00) Stream added, broadcasting: 1\nI0605 00:24:30.906105 1808 log.go:172] (0xc0005200b0) Reply frame received for 1\nI0605 00:24:30.906153 1808 log.go:172] (0xc0005200b0) (0xc000bca000) Create stream\nI0605 00:24:30.906169 1808 log.go:172] (0xc0005200b0) (0xc000bca000) Stream added, broadcasting: 3\nI0605 00:24:30.907148 1808 log.go:172] (0xc0005200b0) Reply frame received for 3\nI0605 00:24:30.907175 1808 log.go:172] (0xc0005200b0) (0xc000712be0) Create stream\nI0605 00:24:30.907184 1808 log.go:172] (0xc0005200b0) (0xc000712be0) Stream added, broadcasting: 5\nI0605 00:24:30.908137 1808 log.go:172] (0xc0005200b0) Reply frame received for 5\nI0605 00:24:30.963373 1808 log.go:172] (0xc0005200b0) Data frame received for 3\nI0605 00:24:30.963424 1808 log.go:172] (0xc000bca000) (3) Data frame handling\nI0605 00:24:30.963451 1808 log.go:172] (0xc000bca000) (3) Data frame sent\nI0605 00:24:30.963470 1808 log.go:172] (0xc0005200b0) Data frame received for 3\nI0605 00:24:30.963478 1808 log.go:172] (0xc000bca000) (3) Data frame handling\nI0605 00:24:30.963529 1808 log.go:172] (0xc0005200b0) Data frame received for 5\nI0605 00:24:30.963579 1808 log.go:172] (0xc000712be0) (5) Data frame handling\nI0605 00:24:30.963601 1808 log.go:172] (0xc000712be0) (5) Data frame sent\nI0605 00:24:30.963626 1808 log.go:172] (0xc0005200b0) Data frame received for 5\nI0605 00:24:30.963636 1808 log.go:172] (0xc000712be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:24:30.967605 1808 log.go:172] (0xc0005200b0) Data frame received for 1\nI0605 00:24:30.967629 1808 
log.go:172] (0xc0004fcf00) (1) Data frame handling\nI0605 00:24:30.967639 1808 log.go:172] (0xc0004fcf00) (1) Data frame sent\nI0605 00:24:30.967656 1808 log.go:172] (0xc0005200b0) (0xc0004fcf00) Stream removed, broadcasting: 1\nI0605 00:24:30.967716 1808 log.go:172] (0xc0005200b0) Go away received\nI0605 00:24:30.968043 1808 log.go:172] (0xc0005200b0) (0xc0004fcf00) Stream removed, broadcasting: 1\nI0605 00:24:30.968067 1808 log.go:172] (0xc0005200b0) (0xc000bca000) Stream removed, broadcasting: 3\nI0605 00:24:30.968076 1808 log.go:172] (0xc0005200b0) (0xc000712be0) Stream removed, broadcasting: 5\n" Jun 5 00:24:30.972: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:24:30.972: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:24:30.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4909 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:24:31.191: INFO: stderr: "I0605 00:24:31.100978 1828 log.go:172] (0xc00003adc0) (0xc00061c460) Create stream\nI0605 00:24:31.101028 1828 log.go:172] (0xc00003adc0) (0xc00061c460) Stream added, broadcasting: 1\nI0605 00:24:31.103396 1828 log.go:172] (0xc00003adc0) Reply frame received for 1\nI0605 00:24:31.103451 1828 log.go:172] (0xc00003adc0) (0xc0006a2460) Create stream\nI0605 00:24:31.103468 1828 log.go:172] (0xc00003adc0) (0xc0006a2460) Stream added, broadcasting: 3\nI0605 00:24:31.104322 1828 log.go:172] (0xc00003adc0) Reply frame received for 3\nI0605 00:24:31.104366 1828 log.go:172] (0xc00003adc0) (0xc00044a0a0) Create stream\nI0605 00:24:31.104382 1828 log.go:172] (0xc00003adc0) (0xc00044a0a0) Stream added, broadcasting: 5\nI0605 00:24:31.105283 1828 log.go:172] (0xc00003adc0) Reply frame received for 5\nI0605 00:24:31.183437 1828 log.go:172] (0xc00003adc0) 
Data frame received for 5\nI0605 00:24:31.183481 1828 log.go:172] (0xc00044a0a0) (5) Data frame handling\nI0605 00:24:31.183503 1828 log.go:172] (0xc00044a0a0) (5) Data frame sent\nI0605 00:24:31.183521 1828 log.go:172] (0xc00003adc0) Data frame received for 5\nI0605 00:24:31.183537 1828 log.go:172] (0xc00044a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:24:31.183566 1828 log.go:172] (0xc00003adc0) Data frame received for 3\nI0605 00:24:31.183597 1828 log.go:172] (0xc0006a2460) (3) Data frame handling\nI0605 00:24:31.183617 1828 log.go:172] (0xc0006a2460) (3) Data frame sent\nI0605 00:24:31.183676 1828 log.go:172] (0xc00003adc0) Data frame received for 3\nI0605 00:24:31.183688 1828 log.go:172] (0xc0006a2460) (3) Data frame handling\nI0605 00:24:31.185437 1828 log.go:172] (0xc00003adc0) Data frame received for 1\nI0605 00:24:31.185451 1828 log.go:172] (0xc00061c460) (1) Data frame handling\nI0605 00:24:31.185463 1828 log.go:172] (0xc00061c460) (1) Data frame sent\nI0605 00:24:31.185475 1828 log.go:172] (0xc00003adc0) (0xc00061c460) Stream removed, broadcasting: 1\nI0605 00:24:31.185716 1828 log.go:172] (0xc00003adc0) Go away received\nI0605 00:24:31.185746 1828 log.go:172] (0xc00003adc0) (0xc00061c460) Stream removed, broadcasting: 1\nI0605 00:24:31.185757 1828 log.go:172] (0xc00003adc0) (0xc0006a2460) Stream removed, broadcasting: 3\nI0605 00:24:31.185765 1828 log.go:172] (0xc00003adc0) (0xc00044a0a0) Stream removed, broadcasting: 5\n" Jun 5 00:24:31.191: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:24:31.191: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:24:31.191: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 5 00:25:01.213: INFO: Deleting all statefulset in ns statefulset-4909 Jun 5 00:25:01.216: INFO: Scaling statefulset ss to 0 Jun 5 00:25:01.224: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:25:01.226: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:25:01.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4909" for this suite. • [SLOW TEST:92.460 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":167,"skipped":2533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:25:01.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting 
for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:25:01.284: INFO: Creating ReplicaSet my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8 Jun 5 00:25:01.317: INFO: Pod name my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8: Found 0 pods out of 1 Jun 5 00:25:06.324: INFO: Pod name my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8: Found 1 pods out of 1 Jun 5 00:25:06.324: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8" is running Jun 5 00:25:06.329: INFO: Pod "my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8-hlhht" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:25:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:25:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:25:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-05 00:25:01 +0000 UTC Reason: Message:}]) Jun 5 00:25:06.330: INFO: Trying to dial the pod Jun 5 00:25:11.342: INFO: Controller my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8: Got expected result from replica 1 [my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8-hlhht]: "my-hostname-basic-c74f3f8e-3692-46ec-8d04-d66aaf907dc8-hlhht", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:25:11.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2377" for this suite. 
• [SLOW TEST:10.097 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":168,"skipped":2576,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:25:11.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:25:18.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1916" for this suite. • [SLOW TEST:7.087 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":169,"skipped":2585,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:25:18.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod Jun 5 00:25:18.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-8079 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 5 00:25:18.610: INFO: stderr: "" Jun 5 00:25:18.610: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. 
Jun 5 00:25:18.610: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 5 00:25:18.610: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8079" to be "running and ready, or succeeded" Jun 5 00:25:18.850: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 239.987609ms Jun 5 00:25:20.854: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244408831s Jun 5 00:25:22.858: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.248625652s Jun 5 00:25:22.858: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 5 00:25:22.858: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jun 5 00:25:22.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079' Jun 5 00:25:22.986: INFO: stderr: "" Jun 5 00:25:22.986: INFO: stdout: "I0605 00:25:21.363218 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/n2qr 552\nI0605 00:25:21.563495 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/dvn 599\nI0605 00:25:21.763416 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/skk 458\nI0605 00:25:21.963395 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/646z 463\nI0605 00:25:22.163415 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/4cm 440\nI0605 00:25:22.363490 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/l2vn 455\nI0605 00:25:22.563402 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/hf5z 553\nI0605 00:25:22.763419 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/bd9l 267\nI0605 00:25:22.963356 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j7pp 379\n" STEP: limiting log lines Jun 5 00:25:22.986: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079 --tail=1' Jun 5 00:25:23.099: INFO: stderr: "" Jun 5 00:25:23.099: INFO: stdout: "I0605 00:25:22.963356 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j7pp 379\n" Jun 5 00:25:23.099: INFO: got output "I0605 00:25:22.963356 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j7pp 379\n" STEP: limiting log bytes Jun 5 00:25:23.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079 --limit-bytes=1' Jun 5 00:25:23.204: INFO: stderr: "" Jun 5 00:25:23.204: INFO: stdout: "I" Jun 5 00:25:23.204: INFO: got output "I" STEP: exposing timestamps Jun 5 00:25:23.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079 --tail=1 --timestamps' Jun 5 00:25:23.313: INFO: stderr: "" Jun 5 00:25:23.313: INFO: stdout: "2020-06-05T00:25:23.16352133Z I0605 00:25:23.163361 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/lts 558\n" Jun 5 00:25:23.313: INFO: got output "2020-06-05T00:25:23.16352133Z I0605 00:25:23.163361 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/lts 558\n" STEP: restricting to a time range Jun 5 00:25:25.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079 --since=1s' Jun 5 00:25:25.934: INFO: stderr: "" Jun 5 00:25:25.934: INFO: stdout: "I0605 00:25:24.963407 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/hgs 306\nI0605 00:25:25.163388 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/m262 423\nI0605 00:25:25.363425 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/pqs 490\nI0605 
00:25:25.563400 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/4kt7 535\nI0605 00:25:25.763425 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/wkp 590\n" Jun 5 00:25:25.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8079 --since=24h' Jun 5 00:25:26.071: INFO: stderr: "" Jun 5 00:25:26.071: INFO: stdout: "I0605 00:25:21.363218 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/n2qr 552\nI0605 00:25:21.563495 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/dvn 599\nI0605 00:25:21.763416 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/skk 458\nI0605 00:25:21.963395 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/646z 463\nI0605 00:25:22.163415 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/4cm 440\nI0605 00:25:22.363490 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/l2vn 455\nI0605 00:25:22.563402 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/hf5z 553\nI0605 00:25:22.763419 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/bd9l 267\nI0605 00:25:22.963356 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/j7pp 379\nI0605 00:25:23.163361 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/lts 558\nI0605 00:25:23.363434 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/mkw 311\nI0605 00:25:23.563455 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/lgp 288\nI0605 00:25:23.763426 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/784 242\nI0605 00:25:23.963396 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/km8 361\nI0605 00:25:24.163391 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/mwz 335\nI0605 00:25:24.363434 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/5nkg 451\nI0605 00:25:24.563378 1 logs_generator.go:76] 16 
PUT /api/v1/namespaces/kube-system/pods/mwn 596\nI0605 00:25:24.763398 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/74kq 326\nI0605 00:25:24.963407 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/hgs 306\nI0605 00:25:25.163388 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/m262 423\nI0605 00:25:25.363425 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/pqs 490\nI0605 00:25:25.563400 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/4kt7 535\nI0605 00:25:25.763425 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/wkp 590\nI0605 00:25:25.963352 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/zdc 532\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 Jun 5 00:25:26.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8079' Jun 5 00:25:35.229: INFO: stderr: "" Jun 5 00:25:35.229: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:25:35.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8079" for this suite. 
• [SLOW TEST:16.802 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":170,"skipped":2589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:25:35.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5195 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 5 00:25:35.364: INFO: Found 0 stateful pods, 
waiting for 3 Jun 5 00:25:45.389: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:25:45.389: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:25:45.389: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 5 00:25:55.370: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:25:55.370: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:25:55.370: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:25:55.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5195 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:25:55.629: INFO: stderr: "I0605 00:25:55.513815 2016 log.go:172] (0xc000acf6b0) (0xc000b523c0) Create stream\nI0605 00:25:55.513864 2016 log.go:172] (0xc000acf6b0) (0xc000b523c0) Stream added, broadcasting: 1\nI0605 00:25:55.519359 2016 log.go:172] (0xc000acf6b0) Reply frame received for 1\nI0605 00:25:55.519408 2016 log.go:172] (0xc000acf6b0) (0xc00053ec80) Create stream\nI0605 00:25:55.519422 2016 log.go:172] (0xc000acf6b0) (0xc00053ec80) Stream added, broadcasting: 3\nI0605 00:25:55.520489 2016 log.go:172] (0xc000acf6b0) Reply frame received for 3\nI0605 00:25:55.520529 2016 log.go:172] (0xc000acf6b0) (0xc0004bc6e0) Create stream\nI0605 00:25:55.520541 2016 log.go:172] (0xc000acf6b0) (0xc0004bc6e0) Stream added, broadcasting: 5\nI0605 00:25:55.522202 2016 log.go:172] (0xc000acf6b0) Reply frame received for 5\nI0605 00:25:55.592750 2016 log.go:172] (0xc000acf6b0) Data frame received for 5\nI0605 00:25:55.592773 2016 log.go:172] (0xc0004bc6e0) (5) Data frame handling\nI0605 00:25:55.592786 2016 log.go:172] (0xc0004bc6e0) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:25:55.620458 2016 log.go:172] (0xc000acf6b0) Data frame received for 5\nI0605 00:25:55.620508 2016 log.go:172] (0xc0004bc6e0) (5) Data frame handling\nI0605 00:25:55.620663 2016 log.go:172] (0xc000acf6b0) Data frame received for 3\nI0605 00:25:55.620687 2016 log.go:172] (0xc00053ec80) (3) Data frame handling\nI0605 00:25:55.620861 2016 log.go:172] (0xc00053ec80) (3) Data frame sent\nI0605 00:25:55.620878 2016 log.go:172] (0xc000acf6b0) Data frame received for 3\nI0605 00:25:55.620888 2016 log.go:172] (0xc00053ec80) (3) Data frame handling\nI0605 00:25:55.622698 2016 log.go:172] (0xc000acf6b0) Data frame received for 1\nI0605 00:25:55.622716 2016 log.go:172] (0xc000b523c0) (1) Data frame handling\nI0605 00:25:55.622729 2016 log.go:172] (0xc000b523c0) (1) Data frame sent\nI0605 00:25:55.622747 2016 log.go:172] (0xc000acf6b0) (0xc000b523c0) Stream removed, broadcasting: 1\nI0605 00:25:55.622760 2016 log.go:172] (0xc000acf6b0) Go away received\nI0605 00:25:55.623194 2016 log.go:172] (0xc000acf6b0) (0xc000b523c0) Stream removed, broadcasting: 1\nI0605 00:25:55.623223 2016 log.go:172] (0xc000acf6b0) (0xc00053ec80) Stream removed, broadcasting: 3\nI0605 00:25:55.623236 2016 log.go:172] (0xc000acf6b0) (0xc0004bc6e0) Stream removed, broadcasting: 5\n" Jun 5 00:25:55.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:25:55.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 5 00:26:05.687: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 5 00:26:15.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5195 
ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:26:15.911: INFO: stderr: "I0605 00:26:15.833707 2036 log.go:172] (0xc00066ec60) (0xc00047a0a0) Create stream\nI0605 00:26:15.833763 2036 log.go:172] (0xc00066ec60) (0xc00047a0a0) Stream added, broadcasting: 1\nI0605 00:26:15.836057 2036 log.go:172] (0xc00066ec60) Reply frame received for 1\nI0605 00:26:15.836092 2036 log.go:172] (0xc00066ec60) (0xc00025a000) Create stream\nI0605 00:26:15.836104 2036 log.go:172] (0xc00066ec60) (0xc00025a000) Stream added, broadcasting: 3\nI0605 00:26:15.837475 2036 log.go:172] (0xc00066ec60) Reply frame received for 3\nI0605 00:26:15.837548 2036 log.go:172] (0xc00066ec60) (0xc00047a820) Create stream\nI0605 00:26:15.837575 2036 log.go:172] (0xc00066ec60) (0xc00047a820) Stream added, broadcasting: 5\nI0605 00:26:15.838648 2036 log.go:172] (0xc00066ec60) Reply frame received for 5\nI0605 00:26:15.902630 2036 log.go:172] (0xc00066ec60) Data frame received for 3\nI0605 00:26:15.902804 2036 log.go:172] (0xc00025a000) (3) Data frame handling\nI0605 00:26:15.902839 2036 log.go:172] (0xc00025a000) (3) Data frame sent\nI0605 00:26:15.902859 2036 log.go:172] (0xc00066ec60) Data frame received for 3\nI0605 00:26:15.902874 2036 log.go:172] (0xc00025a000) (3) Data frame handling\nI0605 00:26:15.902896 2036 log.go:172] (0xc00066ec60) Data frame received for 5\nI0605 00:26:15.902915 2036 log.go:172] (0xc00047a820) (5) Data frame handling\nI0605 00:26:15.902934 2036 log.go:172] (0xc00047a820) (5) Data frame sent\nI0605 00:26:15.902950 2036 log.go:172] (0xc00066ec60) Data frame received for 5\nI0605 00:26:15.902970 2036 log.go:172] (0xc00047a820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:26:15.904768 2036 log.go:172] (0xc00066ec60) Data frame received for 1\nI0605 00:26:15.904802 2036 log.go:172] (0xc00047a0a0) (1) Data frame handling\nI0605 00:26:15.904820 2036 log.go:172] (0xc00047a0a0) (1) Data frame sent\nI0605 
00:26:15.904841 2036 log.go:172] (0xc00066ec60) (0xc00047a0a0) Stream removed, broadcasting: 1\nI0605 00:26:15.904865 2036 log.go:172] (0xc00066ec60) Go away received\nI0605 00:26:15.905423 2036 log.go:172] (0xc00066ec60) (0xc00047a0a0) Stream removed, broadcasting: 1\nI0605 00:26:15.905449 2036 log.go:172] (0xc00066ec60) (0xc00025a000) Stream removed, broadcasting: 3\nI0605 00:26:15.905460 2036 log.go:172] (0xc00066ec60) (0xc00047a820) Stream removed, broadcasting: 5\n" Jun 5 00:26:15.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:26:15.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:26:45.931: INFO: Waiting for StatefulSet statefulset-5195/ss2 to complete update STEP: Rolling back to a previous revision Jun 5 00:26:55.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5195 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:26:56.249: INFO: stderr: "I0605 00:26:56.089388 2058 log.go:172] (0xc000444dc0) (0xc000aa0500) Create stream\nI0605 00:26:56.089473 2058 log.go:172] (0xc000444dc0) (0xc000aa0500) Stream added, broadcasting: 1\nI0605 00:26:56.094386 2058 log.go:172] (0xc000444dc0) Reply frame received for 1\nI0605 00:26:56.094442 2058 log.go:172] (0xc000444dc0) (0xc0006465a0) Create stream\nI0605 00:26:56.094457 2058 log.go:172] (0xc000444dc0) (0xc0006465a0) Stream added, broadcasting: 3\nI0605 00:26:56.095365 2058 log.go:172] (0xc000444dc0) Reply frame received for 3\nI0605 00:26:56.095397 2058 log.go:172] (0xc000444dc0) (0xc000524280) Create stream\nI0605 00:26:56.095408 2058 log.go:172] (0xc000444dc0) (0xc000524280) Stream added, broadcasting: 5\nI0605 00:26:56.096340 2058 log.go:172] (0xc000444dc0) Reply frame received for 5\nI0605 00:26:56.184899 2058 log.go:172] (0xc000444dc0) 
Data frame received for 5\nI0605 00:26:56.184932 2058 log.go:172] (0xc000524280) (5) Data frame handling\nI0605 00:26:56.184954 2058 log.go:172] (0xc000524280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:26:56.240726 2058 log.go:172] (0xc000444dc0) Data frame received for 3\nI0605 00:26:56.240828 2058 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0605 00:26:56.240907 2058 log.go:172] (0xc0006465a0) (3) Data frame sent\nI0605 00:26:56.241501 2058 log.go:172] (0xc000444dc0) Data frame received for 5\nI0605 00:26:56.241552 2058 log.go:172] (0xc000524280) (5) Data frame handling\nI0605 00:26:56.241577 2058 log.go:172] (0xc000444dc0) Data frame received for 3\nI0605 00:26:56.241711 2058 log.go:172] (0xc0006465a0) (3) Data frame handling\nI0605 00:26:56.244074 2058 log.go:172] (0xc000444dc0) Data frame received for 1\nI0605 00:26:56.244119 2058 log.go:172] (0xc000aa0500) (1) Data frame handling\nI0605 00:26:56.244149 2058 log.go:172] (0xc000aa0500) (1) Data frame sent\nI0605 00:26:56.244182 2058 log.go:172] (0xc000444dc0) (0xc000aa0500) Stream removed, broadcasting: 1\nI0605 00:26:56.244332 2058 log.go:172] (0xc000444dc0) Go away received\nI0605 00:26:56.244703 2058 log.go:172] (0xc000444dc0) (0xc000aa0500) Stream removed, broadcasting: 1\nI0605 00:26:56.244742 2058 log.go:172] (0xc000444dc0) (0xc0006465a0) Stream removed, broadcasting: 3\nI0605 00:26:56.244768 2058 log.go:172] (0xc000444dc0) (0xc000524280) Stream removed, broadcasting: 5\n" Jun 5 00:26:56.249: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:26:56.249: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:27:06.283: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 5 00:27:16.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5195 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:27:16.538: INFO: stderr: "I0605 00:27:16.442267 2079 log.go:172] (0xc00003a420) (0xc000302c80) Create stream\nI0605 00:27:16.442325 2079 log.go:172] (0xc00003a420) (0xc000302c80) Stream added, broadcasting: 1\nI0605 00:27:16.444678 2079 log.go:172] (0xc00003a420) Reply frame received for 1\nI0605 00:27:16.444729 2079 log.go:172] (0xc00003a420) (0xc0000dde00) Create stream\nI0605 00:27:16.444743 2079 log.go:172] (0xc00003a420) (0xc0000dde00) Stream added, broadcasting: 3\nI0605 00:27:16.445874 2079 log.go:172] (0xc00003a420) Reply frame received for 3\nI0605 00:27:16.446030 2079 log.go:172] (0xc00003a420) (0xc000139ea0) Create stream\nI0605 00:27:16.446044 2079 log.go:172] (0xc00003a420) (0xc000139ea0) Stream added, broadcasting: 5\nI0605 00:27:16.446961 2079 log.go:172] (0xc00003a420) Reply frame received for 5\nI0605 00:27:16.528585 2079 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:27:16.528636 2079 log.go:172] (0xc000139ea0) (5) Data frame handling\nI0605 00:27:16.528655 2079 log.go:172] (0xc000139ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:27:16.528686 2079 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:27:16.528709 2079 log.go:172] (0xc000139ea0) (5) Data frame handling\nI0605 00:27:16.528743 2079 log.go:172] (0xc00003a420) Data frame received for 3\nI0605 00:27:16.528759 2079 log.go:172] (0xc0000dde00) (3) Data frame handling\nI0605 00:27:16.528784 2079 log.go:172] (0xc0000dde00) (3) Data frame sent\nI0605 00:27:16.528801 2079 log.go:172] (0xc00003a420) Data frame received for 3\nI0605 00:27:16.528814 2079 log.go:172] (0xc0000dde00) (3) Data frame handling\nI0605 00:27:16.530775 2079 log.go:172] (0xc00003a420) Data frame received for 1\nI0605 00:27:16.530802 2079 log.go:172] (0xc000302c80) (1) Data frame handling\nI0605 00:27:16.530828 2079 log.go:172] (0xc000302c80) 
(1) Data frame sent\nI0605 00:27:16.530852 2079 log.go:172] (0xc00003a420) (0xc000302c80) Stream removed, broadcasting: 1\nI0605 00:27:16.530868 2079 log.go:172] (0xc00003a420) Go away received\nI0605 00:27:16.531378 2079 log.go:172] (0xc00003a420) (0xc000302c80) Stream removed, broadcasting: 1\nI0605 00:27:16.531402 2079 log.go:172] (0xc00003a420) (0xc0000dde00) Stream removed, broadcasting: 3\nI0605 00:27:16.531415 2079 log.go:172] (0xc00003a420) (0xc000139ea0) Stream removed, broadcasting: 5\n" Jun 5 00:27:16.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:27:16.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:27:26.562: INFO: Waiting for StatefulSet statefulset-5195/ss2 to complete update Jun 5 00:27:26.562: INFO: Waiting for Pod statefulset-5195/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 5 00:27:26.562: INFO: Waiting for Pod statefulset-5195/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 5 00:27:36.626: INFO: Waiting for StatefulSet statefulset-5195/ss2 to complete update Jun 5 00:27:36.626: INFO: Waiting for Pod statefulset-5195/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 5 00:27:46.621: INFO: Waiting for StatefulSet statefulset-5195/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 5 00:27:56.571: INFO: Deleting all statefulset in ns statefulset-5195 Jun 5 00:27:56.574: INFO: Scaling statefulset ss2 to 0 Jun 5 00:28:26.616: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:28:26.619: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:28:26.635: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5195" for this suite. • [SLOW TEST:171.402 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":171,"skipped":2683,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:28:26.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 5 00:28:26.700: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Jun 5 00:28:27.243: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 5 00:28:29.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:28:31.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:28:34.201: INFO: Waited 623.80827ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:28:34.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9774" for this suite. • [SLOW TEST:8.247 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":172,"skipped":2689,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:28:34.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new 
ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:28:34.965: INFO: Creating deployment "test-recreate-deployment" Jun 5 00:28:35.037: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 5 00:28:35.223: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 5 00:28:37.229: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 5 00:28:37.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913715, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913715, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913715, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:28:39.235: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 5 00:28:39.242: INFO: Updating deployment test-recreate-deployment Jun 5 00:28:39.242: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 5 00:28:39.869: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5475 
/apis/apps/v1/namespaces/deployment-5475/deployments/test-recreate-deployment 6f13497f-0889-4a14-9393-6f83252550d6 10339567 2 2020-06-05 00:28:34 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052b0a28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-05 00:28:39 +0000 UTC,LastTransitionTime:2020-06-05 00:28:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-06-05 00:28:39 +0000 UTC,LastTransitionTime:2020-06-05 00:28:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 5 00:28:39.905: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-5475 /apis/apps/v1/namespaces/deployment-5475/replicasets/test-recreate-deployment-d5667d9c7 79699c2f-9d83-4c0a-be44-aa10ae3ba54c 10339565 1 2020-06-05 00:28:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6f13497f-0889-4a14-9393-6f83252550d6 0xc00338b2c0 0xc00338b2c1}] [] 
[{kube-controller-manager Update apps/v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f13497f-0889-4a14-9393-6f83252550d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00338b338 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:28:39.905: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 5 00:28:39.905: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-5475 /apis/apps/v1/namespaces/deployment-5475/replicasets/test-recreate-deployment-6d65b9f6d8 e77f67ea-9be7-427f-86ac-5028318076df 10339556 2 2020-06-05 00:28:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6f13497f-0889-4a14-9393-6f83252550d6 0xc00338b1c7 0xc00338b1c8}] [] [{kube-controller-manager Update apps/v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f13497f-0889-4a14-9393-6f83252550d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00338b258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:28:39.919: INFO: Pod "test-recreate-deployment-d5667d9c7-jbm6s" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-jbm6s test-recreate-deployment-d5667d9c7- deployment-5475 /api/v1/namespaces/deployment-5475/pods/test-recreate-deployment-d5667d9c7-jbm6s 8a6c73b6-6971-46e3-85a4-14824e0e40b8 10339570 0 2020-06-05 00:28:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 79699c2f-9d83-4c0a-be44-aa10ae3ba54c 0xc00338b830 0xc00338b831}] [] [{kube-controller-manager Update v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79699c2f-9d83-4c0a-be44-aa10ae3ba54c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 00:28:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4qd62,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4qd62,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4qd62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:28:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:28:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:28:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:28:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-05 00:28:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:28:39.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5475" for this suite. • [SLOW TEST:5.132 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":173,"skipped":2703,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:28:40.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:28:40.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01" in namespace "downward-api-4734" to be "Succeeded or Failed" Jun 5 00:28:40.624: INFO: Pod "downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01": Phase="Pending", Reason="", readiness=false. Elapsed: 313.754132ms Jun 5 00:28:42.628: INFO: Pod "downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317978487s Jun 5 00:28:44.633: INFO: Pod "downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322968209s STEP: Saw pod success Jun 5 00:28:44.633: INFO: Pod "downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01" satisfied condition "Succeeded or Failed" Jun 5 00:28:44.636: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01 container client-container: STEP: delete the pod Jun 5 00:28:44.678: INFO: Waiting for pod downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01 to disappear Jun 5 00:28:44.683: INFO: Pod downwardapi-volume-2719439d-fc84-4e5b-b4aa-fad72d7b8a01 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:28:44.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4734" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2709,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:28:44.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 5 00:28:48.767: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9992 PodName:var-expansion-70dc9f1b-89aa-4daa-9594-b4bb2aa978fc ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:28:48.767: INFO: >>> kubeConfig: /root/.kube/config I0605 00:28:48.827808 7 log.go:172] (0xc0027673f0) (0xc000c24fa0) Create stream I0605 00:28:48.827831 7 log.go:172] (0xc0027673f0) (0xc000c24fa0) Stream added, broadcasting: 1 I0605 00:28:48.829997 7 log.go:172] (0xc0027673f0) Reply frame received for 1 I0605 00:28:48.830040 7 log.go:172] (0xc0027673f0) (0xc000cab400) Create stream I0605 00:28:48.830058 7 log.go:172] (0xc0027673f0) (0xc000cab400) Stream added, broadcasting: 3 I0605 00:28:48.831411 7 log.go:172] (0xc0027673f0) Reply frame received for 3 I0605 00:28:48.831474 7 log.go:172] (0xc0027673f0) (0xc0013f5180) 
Create stream I0605 00:28:48.831493 7 log.go:172] (0xc0027673f0) (0xc0013f5180) Stream added, broadcasting: 5 I0605 00:28:48.832756 7 log.go:172] (0xc0027673f0) Reply frame received for 5 I0605 00:28:48.927262 7 log.go:172] (0xc0027673f0) Data frame received for 5 I0605 00:28:48.927297 7 log.go:172] (0xc0013f5180) (5) Data frame handling I0605 00:28:48.927314 7 log.go:172] (0xc0027673f0) Data frame received for 3 I0605 00:28:48.927322 7 log.go:172] (0xc000cab400) (3) Data frame handling I0605 00:28:48.928499 7 log.go:172] (0xc0027673f0) Data frame received for 1 I0605 00:28:48.928517 7 log.go:172] (0xc000c24fa0) (1) Data frame handling I0605 00:28:48.928525 7 log.go:172] (0xc000c24fa0) (1) Data frame sent I0605 00:28:48.928543 7 log.go:172] (0xc0027673f0) (0xc000c24fa0) Stream removed, broadcasting: 1 I0605 00:28:48.928571 7 log.go:172] (0xc0027673f0) Go away received I0605 00:28:48.928681 7 log.go:172] (0xc0027673f0) (0xc000c24fa0) Stream removed, broadcasting: 1 I0605 00:28:48.928699 7 log.go:172] (0xc0027673f0) (0xc000cab400) Stream removed, broadcasting: 3 I0605 00:28:48.928722 7 log.go:172] (0xc0027673f0) (0xc0013f5180) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jun 5 00:28:48.931: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9992 PodName:var-expansion-70dc9f1b-89aa-4daa-9594-b4bb2aa978fc ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:28:48.931: INFO: >>> kubeConfig: /root/.kube/config I0605 00:28:48.962412 7 log.go:172] (0xc002950a50) (0xc00126a1e0) Create stream I0605 00:28:48.962436 7 log.go:172] (0xc002950a50) (0xc00126a1e0) Stream added, broadcasting: 1 I0605 00:28:48.964563 7 log.go:172] (0xc002950a50) Reply frame received for 1 I0605 00:28:48.964594 7 log.go:172] (0xc002950a50) (0xc0013f5220) Create stream I0605 00:28:48.964604 7 log.go:172] (0xc002950a50) (0xc0013f5220) Stream added, broadcasting: 3 I0605 
00:28:48.965876 7 log.go:172] (0xc002950a50) Reply frame received for 3 I0605 00:28:48.965923 7 log.go:172] (0xc002950a50) (0xc0020b6140) Create stream I0605 00:28:48.965939 7 log.go:172] (0xc002950a50) (0xc0020b6140) Stream added, broadcasting: 5 I0605 00:28:48.966897 7 log.go:172] (0xc002950a50) Reply frame received for 5 I0605 00:28:49.050423 7 log.go:172] (0xc002950a50) Data frame received for 5 I0605 00:28:49.050458 7 log.go:172] (0xc0020b6140) (5) Data frame handling I0605 00:28:49.050477 7 log.go:172] (0xc002950a50) Data frame received for 3 I0605 00:28:49.050487 7 log.go:172] (0xc0013f5220) (3) Data frame handling I0605 00:28:49.051778 7 log.go:172] (0xc002950a50) Data frame received for 1 I0605 00:28:49.051818 7 log.go:172] (0xc00126a1e0) (1) Data frame handling I0605 00:28:49.051876 7 log.go:172] (0xc00126a1e0) (1) Data frame sent I0605 00:28:49.051907 7 log.go:172] (0xc002950a50) (0xc00126a1e0) Stream removed, broadcasting: 1 I0605 00:28:49.051981 7 log.go:172] (0xc002950a50) Go away received I0605 00:28:49.052040 7 log.go:172] (0xc002950a50) (0xc00126a1e0) Stream removed, broadcasting: 1 I0605 00:28:49.052095 7 log.go:172] (0xc002950a50) (0xc0013f5220) Stream removed, broadcasting: 3 I0605 00:28:49.052121 7 log.go:172] (0xc002950a50) (0xc0020b6140) Stream removed, broadcasting: 5 STEP: updating the annotation value Jun 5 00:28:49.563: INFO: Successfully updated pod "var-expansion-70dc9f1b-89aa-4daa-9594-b4bb2aa978fc" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 5 00:28:49.597: INFO: Deleting pod "var-expansion-70dc9f1b-89aa-4daa-9594-b4bb2aa978fc" in namespace "var-expansion-9992" Jun 5 00:28:49.601: INFO: Wait up to 5m0s for pod "var-expansion-70dc9f1b-89aa-4daa-9594-b4bb2aa978fc" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:29:25.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "var-expansion-9992" for this suite. • [SLOW TEST:40.962 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":175,"skipped":2722,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:29:25.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 5 00:29:25.713: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 5 00:29:25.730: INFO: Waiting for terminating namespaces to be deleted... 
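Interleaved with the step output above, Ginkgo emits one JSON progress marker per completed spec (the `{"msg":"PASSED ...","total":288,"completed":...}` lines). A short sketch of tallying them — the marker text is copied from this log; the percentage arithmetic is the only thing added:

```python
import json

# One progress marker per completed spec, as emitted throughout this log.
line = ('{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing '
        'subpaths in container [sig-storage][Slow] [Conformance]",'
        '"total":288,"completed":175,"skipped":2722,"failed":0}')

marker = json.loads(line)
pct = 100.0 * marker["completed"] / marker["total"]
print(f'{marker["completed"]}/{marker["total"]} specs ({pct:.1f}%), '
      f'failed={marker["failed"]}')  # → 175/288 specs (60.8%), failed=0
```

Grepping a full run's output for lines starting with `{"msg":` and feeding each through this parser gives a live pass/fail count without waiting for the final suite summary.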
Jun 5 00:29:25.733: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 5 00:29:25.738: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.738: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 5 00:29:25.738: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.738: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 5 00:29:25.738: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.738: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:29:25.738: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.739: INFO: Container kube-proxy ready: true, restart count 0 Jun 5 00:29:25.739: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 5 00:29:25.744: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.744: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 5 00:29:25.744: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.744: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 5 00:29:25.744: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.744: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:29:25.744: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 00:29:25.744: INFO: Container 
kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16157e5966def3ad], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16157e596904e377], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:29:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8296" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":176,"skipped":2735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:29:26.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3036 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3036 STEP: Deleting pre-stop pod Jun 5 00:29:39.972: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:29:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3036" for this suite. 
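The `Saw: {...}` JSON blob above is the state the PreStop server pod reports back; the test passes when the server has recorded the tester's prestop callback. A minimal sketch of that check — the field names are taken from the log, the parsing helper itself is assumed, not the real e2e code:

```python
import json

# State blob as reported by the PreStop server pod (shape copied from the log).
state = json.loads("""
{
  "Hostname": "server",
  "Sent": null,
  "Received": {"prestop": 1},
  "Errors": null,
  "Log": [],
  "StillContactingPeers": true
}
""")

def prestop_hook_fired(state: dict) -> bool:
    """Return True if the server recorded at least one prestop callback."""
    received = state.get("Received") or {}
    return received.get("prestop", 0) >= 1

print(prestop_hook_fired(state))  # the test passes when this is True
```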
• [SLOW TEST:13.209 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":177,"skipped":2801,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:29:40.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 5 00:29:40.588: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339888 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 00:29:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:29:40.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339889 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 00:29:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:29:40.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339890 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 00:29:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 5 00:29:50.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339937 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 
00:29:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:29:50.628: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339938 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 00:29:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:29:50.629: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4989 /api/v1/namespaces/watch-4989/configmaps/e2e-watch-test-label-changed 9ac1df6c-c082-4f2f-90d1-f0901d8ac56a 10339939 0 2020-06-05 00:29:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-05 00:29:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:29:50.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4989" for this suite. 
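The watch test above exercises a label-selector watch: changing the label away from the selector surfaces as a DELETED notification, and restoring it surfaces as ADDED. The selection rule can be sketched without a cluster — the event model below is a deliberate simplification, not the real client-go types:

```python
# Simplified model of a label-selector watch: an object is visible to the
# watcher only while its labels match the selector. Leaving the selector is
# observed as DELETED; (re)entering it is observed as ADDED.
SELECTOR = {"watch-this-configmap": "label-changed-and-restored"}

def deliver(prev_labels, new_labels, event_type):
    """Translate a raw object update into what a label-filtered watcher sees."""
    was = prev_labels == SELECTOR
    now = new_labels == SELECTOR
    if was and not now:
        return "DELETED"   # object left the selector
    if not was and now:
        return "ADDED"     # object (re)entered the selector
    if was and now:
        return event_type  # normal pass-through
    return None            # invisible to this watcher

# The label flip in the log: matching -> changed -> restored.
assert deliver(SELECTOR, {"watch-this-configmap": "other"}, "MODIFIED") == "DELETED"
assert deliver({"watch-this-configmap": "other"}, SELECTOR, "MODIFIED") == "ADDED"
```

This is why the log shows a DELETED event for a ConfigMap that still exists in the API server: deletion here means "no longer matches the watch's selector", not object deletion.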
• [SLOW TEST:10.624 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":178,"skipped":2807,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:29:50.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-461 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-461 STEP: Creating statefulset with conflicting port in namespace statefulset-461 STEP: Waiting until pod test-pod will start running 
in namespace statefulset-461 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-461 Jun 5 00:29:54.855: INFO: Observed stateful pod in namespace: statefulset-461, name: ss-0, uid: 36c96c75-056f-4c1c-ba25-abe5b8bb23b2, status phase: Pending. Waiting for statefulset controller to delete. Jun 5 00:29:55.386: INFO: Observed stateful pod in namespace: statefulset-461, name: ss-0, uid: 36c96c75-056f-4c1c-ba25-abe5b8bb23b2, status phase: Failed. Waiting for statefulset controller to delete. Jun 5 00:29:55.395: INFO: Observed stateful pod in namespace: statefulset-461, name: ss-0, uid: 36c96c75-056f-4c1c-ba25-abe5b8bb23b2, status phase: Failed. Waiting for statefulset controller to delete. Jun 5 00:29:55.413: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-461 STEP: Removing pod with conflicting port in namespace statefulset-461 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-461 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 5 00:29:59.495: INFO: Deleting all statefulset in ns statefulset-461 Jun 5 00:29:59.498: INFO: Scaling statefulset ss to 0 Jun 5 00:30:09.519: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:30:09.522: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:30:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-461" for this suite. 
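The recreate check above watches ss-0 go Pending → Failed → deleted, then waits for the controller to bring up a replacement under a new UID. A toy sketch of that observation logic — the event tuples mirror the sequence in the log, but the helper is an assumption, not the real e2e code:

```python
# Events as (uid, phase_or_event) pairs, mirroring the log: the pod with the
# conflicting port is observed Pending, then Failed, then deleted, and the
# StatefulSet controller recreates ss-0 under a fresh UID.
events = [
    ("36c96c75", "Pending"),
    ("36c96c75", "Failed"),
    ("36c96c75", "DELETED"),
    ("new-uid-1", "Running"),
]

def recreated_at_least_once(events):
    """True if a pod was deleted and a pod with a different UID later ran."""
    deleted_uid = None
    for uid, what in events:
        if what == "DELETED":
            deleted_uid = uid
        elif what == "Running" and deleted_uid is not None and uid != deleted_uid:
            return True
    return False

print(recreated_at_least_once(events))  # → True
```

Comparing UIDs rather than names is the important part: the replacement pod reuses the name ss-0, so only the UID distinguishes the recreated pod from the original.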
• [SLOW TEST:18.907 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":179,"skipped":2824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:30:09.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 5 00:30:10.242: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 5 00:30:12.253: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913810, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913810, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913810, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913810, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:30:15.283: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:30:15.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:30:16.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9615" for this suite. 
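The webhook deployed above receives `ConversionReview` requests and must answer with the objects rewritten to the desired version. A minimal handler sketch — the envelope fields (`uid`, `desiredAPIVersion`, `objects`, `convertedObjects`, `result`) follow the `apiextensions.k8s.io/v1` contract, while the group and kind names below are purely hypothetical:

```python
import copy

def convert(review: dict) -> dict:
    """Toy conversion handler: rewrite each object's apiVersion to the
    desired one and echo the request UID back, as the ConversionReview
    contract requires. Real handlers would also transform the spec fields."""
    req = review["request"]
    converted = []
    for obj in req["objects"]:
        out = copy.deepcopy(obj)
        out["apiVersion"] = req["desiredAPIVersion"]
        converted.append(out)
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "ConversionReview",
        "response": {
            "uid": req["uid"],
            "convertedObjects": converted,
            "result": {"status": "Success"},
        },
    }

# Hypothetical request, shaped like what the apiserver sends the webhook.
review = {
    "request": {
        "uid": "abc-123",
        "desiredAPIVersion": "stable.example.com/v2",
        "objects": [{"apiVersion": "stable.example.com/v1", "kind": "E2ETest"}],
    }
}
print(convert(review)["response"]["convertedObjects"][0]["apiVersion"])
# → stable.example.com/v2
```

The "v2 custom resource should be converted" step above is effectively asserting this round trip: a CR created at v1 reads back at v2 once the webhook answers with `status: Success` and the matching UID.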
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.036 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":180,"skipped":2874,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:30:16.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8950 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8950 I0605 00:30:16.767743 7 
runners.go:190] Created replication controller with name: externalname-service, namespace: services-8950, replica count: 2 I0605 00:30:19.818102 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:30:22.818363 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:30:22.818: INFO: Creating new exec pod Jun 5 00:30:27.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8950 execpod25ttm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 5 00:30:28.056: INFO: stderr: "I0605 00:30:27.984566 2099 log.go:172] (0xc000a471e0) (0xc0009cc500) Create stream\nI0605 00:30:27.984631 2099 log.go:172] (0xc000a471e0) (0xc0009cc500) Stream added, broadcasting: 1\nI0605 00:30:27.990253 2099 log.go:172] (0xc000a471e0) Reply frame received for 1\nI0605 00:30:27.990309 2099 log.go:172] (0xc000a471e0) (0xc00054a460) Create stream\nI0605 00:30:27.990325 2099 log.go:172] (0xc000a471e0) (0xc00054a460) Stream added, broadcasting: 3\nI0605 00:30:27.991213 2099 log.go:172] (0xc000a471e0) Reply frame received for 3\nI0605 00:30:27.991261 2099 log.go:172] (0xc000a471e0) (0xc000534140) Create stream\nI0605 00:30:27.991281 2099 log.go:172] (0xc000a471e0) (0xc000534140) Stream added, broadcasting: 5\nI0605 00:30:27.992098 2099 log.go:172] (0xc000a471e0) Reply frame received for 5\nI0605 00:30:28.047524 2099 log.go:172] (0xc000a471e0) Data frame received for 5\nI0605 00:30:28.047552 2099 log.go:172] (0xc000534140) (5) Data frame handling\nI0605 00:30:28.047573 2099 log.go:172] (0xc000534140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0605 00:30:28.048430 2099 log.go:172] (0xc000a471e0) Data frame received for 5\nI0605 00:30:28.048444 2099 log.go:172] 
(0xc000534140) (5) Data frame handling\nI0605 00:30:28.048459 2099 log.go:172] (0xc000534140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0605 00:30:28.048889 2099 log.go:172] (0xc000a471e0) Data frame received for 3\nI0605 00:30:28.048913 2099 log.go:172] (0xc00054a460) (3) Data frame handling\nI0605 00:30:28.049045 2099 log.go:172] (0xc000a471e0) Data frame received for 5\nI0605 00:30:28.049072 2099 log.go:172] (0xc000534140) (5) Data frame handling\nI0605 00:30:28.050949 2099 log.go:172] (0xc000a471e0) Data frame received for 1\nI0605 00:30:28.050993 2099 log.go:172] (0xc0009cc500) (1) Data frame handling\nI0605 00:30:28.051021 2099 log.go:172] (0xc0009cc500) (1) Data frame sent\nI0605 00:30:28.051044 2099 log.go:172] (0xc000a471e0) (0xc0009cc500) Stream removed, broadcasting: 1\nI0605 00:30:28.051066 2099 log.go:172] (0xc000a471e0) Go away received\nI0605 00:30:28.051583 2099 log.go:172] (0xc000a471e0) (0xc0009cc500) Stream removed, broadcasting: 1\nI0605 00:30:28.051604 2099 log.go:172] (0xc000a471e0) (0xc00054a460) Stream removed, broadcasting: 3\nI0605 00:30:28.051617 2099 log.go:172] (0xc000a471e0) (0xc000534140) Stream removed, broadcasting: 5\n" Jun 5 00:30:28.056: INFO: stdout: "" Jun 5 00:30:28.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8950 execpod25ttm -- /bin/sh -x -c nc -zv -t -w 2 10.109.201.123 80' Jun 5 00:30:28.285: INFO: stderr: "I0605 00:30:28.194270 2121 log.go:172] (0xc000bdf340) (0xc000b6e500) Create stream\nI0605 00:30:28.194334 2121 log.go:172] (0xc000bdf340) (0xc000b6e500) Stream added, broadcasting: 1\nI0605 00:30:28.199665 2121 log.go:172] (0xc000bdf340) Reply frame received for 1\nI0605 00:30:28.199693 2121 log.go:172] (0xc000bdf340) (0xc0006c4500) Create stream\nI0605 00:30:28.199701 2121 log.go:172] (0xc000bdf340) (0xc0006c4500) Stream added, broadcasting: 3\nI0605 00:30:28.200600 2121 log.go:172] 
(0xc000bdf340) Reply frame received for 3\nI0605 00:30:28.200651 2121 log.go:172] (0xc000bdf340) (0xc0005bc1e0) Create stream\nI0605 00:30:28.200668 2121 log.go:172] (0xc000bdf340) (0xc0005bc1e0) Stream added, broadcasting: 5\nI0605 00:30:28.201850 2121 log.go:172] (0xc000bdf340) Reply frame received for 5\nI0605 00:30:28.275877 2121 log.go:172] (0xc000bdf340) Data frame received for 3\nI0605 00:30:28.275909 2121 log.go:172] (0xc0006c4500) (3) Data frame handling\nI0605 00:30:28.275937 2121 log.go:172] (0xc000bdf340) Data frame received for 5\nI0605 00:30:28.275951 2121 log.go:172] (0xc0005bc1e0) (5) Data frame handling\nI0605 00:30:28.275967 2121 log.go:172] (0xc0005bc1e0) (5) Data frame sent\nI0605 00:30:28.275973 2121 log.go:172] (0xc000bdf340) Data frame received for 5\nI0605 00:30:28.275982 2121 log.go:172] (0xc0005bc1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.201.123 80\nConnection to 10.109.201.123 80 port [tcp/http] succeeded!\nI0605 00:30:28.278501 2121 log.go:172] (0xc000bdf340) Data frame received for 1\nI0605 00:30:28.278529 2121 log.go:172] (0xc000b6e500) (1) Data frame handling\nI0605 00:30:28.278546 2121 log.go:172] (0xc000b6e500) (1) Data frame sent\nI0605 00:30:28.278564 2121 log.go:172] (0xc000bdf340) (0xc000b6e500) Stream removed, broadcasting: 1\nI0605 00:30:28.278585 2121 log.go:172] (0xc000bdf340) Go away received\nI0605 00:30:28.279072 2121 log.go:172] (0xc000bdf340) (0xc000b6e500) Stream removed, broadcasting: 1\nI0605 00:30:28.279114 2121 log.go:172] (0xc000bdf340) (0xc0006c4500) Stream removed, broadcasting: 3\nI0605 00:30:28.279136 2121 log.go:172] (0xc000bdf340) (0xc0005bc1e0) Stream removed, broadcasting: 5\n" Jun 5 00:30:28.285: INFO: stdout: "" Jun 5 00:30:28.285: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:30:28.387: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-8950" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.816 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":181,"skipped":2881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:30:28.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 5 00:30:28.439: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 5 00:30:28.480: INFO: Waiting for terminating namespaces to be deleted... 
Jun 5 00:30:28.484: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 5 00:30:28.489: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 5 00:30:28.489: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 5 00:30:28.489: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:30:28.489: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container kube-proxy ready: true, restart count 0 Jun 5 00:30:28.489: INFO: execpod25ttm from services-8950 started at 2020-06-05 00:30:22 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container agnhost-pause ready: true, restart count 0 Jun 5 00:30:28.489: INFO: externalname-service-vlrnd from services-8950 started at 2020-06-05 00:30:17 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.489: INFO: Container externalname-service ready: true, restart count 0 Jun 5 00:30:28.489: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 5 00:30:28.494: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.494: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 5 00:30:28.494: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.494: 
INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 5 00:30:28.494: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.494: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 00:30:28.494: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.494: INFO: Container kube-proxy ready: true, restart count 0 Jun 5 00:30:28.494: INFO: externalname-service-kw7l7 from services-8950 started at 2020-06-05 00:30:16 +0000 UTC (1 container statuses recorded) Jun 5 00:30:28.494: INFO: Container externalname-service ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jun 5 00:30:28.572: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker Jun 5 00:30:28.572: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 Jun 5 00:30:28.572: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker Jun 5 00:30:28.572: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 Jun 5 00:30:28.572: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker Jun 5 00:30:28.572: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 Jun 5 00:30:28.572: INFO: Pod execpod25ttm requesting resource cpu=0m on Node latest-worker Jun 5 00:30:28.572: INFO: Pod externalname-service-kw7l7 requesting resource cpu=0m on Node latest-worker2 Jun 5 00:30:28.572: INFO: Pod externalname-service-vlrnd requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Jun 5 00:30:28.572: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Jun 5 00:30:28.602: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828.16157e6809324d2f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5286/filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828.16157e68a6b73868], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828.16157e68d55a1d97], Reason = [Created], Message = [Created container filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828] STEP: Considering event: Type = [Normal], Name = [filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828.16157e68e53e2f07], Reason = [Started], Message = [Started container filler-pod-091b4cd6-3158-4d45-8328-a03bd040c828] STEP: Considering event: Type = [Normal], Name = [filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d.16157e68074fe96c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5286/filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d.16157e685637171d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d.16157e68a4c99507], Reason = [Created], Message = [Created container filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d] STEP: Considering event: Type = [Normal], Name = [filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d.16157e68c16d4281], Reason = [Started], Message = [Started container 
filler-pod-a0c65c3f-cdd2-40f2-83bf-0aabdd706b4d] STEP: Considering event: Type = [Warning], Name = [additional-pod.16157e6970a5e60d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16157e69736012d9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:30:36.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5286" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.665 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":182,"skipped":2955,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:30:36.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4411, will wait for the garbage collector to delete the pods Jun 5 00:30:42.276: INFO: Deleting Job.batch foo took: 5.866041ms Jun 5 00:30:42.376: INFO: Terminating Job.batch foo pods took: 100.208059ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:15.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"job-4411" for this suite. • [SLOW TEST:39.525 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":183,"skipped":2970,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:15.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:31:15.688: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:16.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6159" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":184,"skipped":2976,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:16.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-87d64ad8-fa24-4169-86ee-cead05583750 STEP: Creating a pod to test consume secrets Jun 5 00:31:16.953: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939" in namespace "projected-8090" to be "Succeeded or Failed" Jun 5 00:31:16.967: INFO: Pod "pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939": Phase="Pending", Reason="", readiness=false. Elapsed: 13.965303ms Jun 5 00:31:18.974: INFO: Pod "pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020606625s Jun 5 00:31:20.979: INFO: Pod "pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025507882s STEP: Saw pod success Jun 5 00:31:20.979: INFO: Pod "pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939" satisfied condition "Succeeded or Failed" Jun 5 00:31:20.981: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939 container projected-secret-volume-test: STEP: delete the pod Jun 5 00:31:21.027: INFO: Waiting for pod pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939 to disappear Jun 5 00:31:21.032: INFO: Pod pod-projected-secrets-f3bd0364-8229-4e4c-82f4-d9362879d939 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:21.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8090" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":2992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:21.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 5 00:31:21.152: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:28.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1847" for this suite. • [SLOW TEST:7.512 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":186,"skipped":3038,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:28.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource 
versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:33.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2762" for this suite. • [SLOW TEST:5.037 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":187,"skipped":3047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:33.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3031 STEP: creating service affinity-nodeport in namespace services-3031 STEP: creating replication controller affinity-nodeport in namespace services-3031 I0605 00:31:33.735823 7 
runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3031, replica count: 3 I0605 00:31:36.786300 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:31:39.786581 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:31:39.796: INFO: Creating new exec pod Jun 5 00:31:45.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3031 execpod-affinitymppdw -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jun 5 00:31:45.895: INFO: stderr: "I0605 00:31:45.165333 2141 log.go:172] (0xc0008e2420) (0xc000867ea0) Create stream\nI0605 00:31:45.165524 2141 log.go:172] (0xc0008e2420) (0xc000867ea0) Stream added, broadcasting: 1\nI0605 00:31:45.168472 2141 log.go:172] (0xc0008e2420) Reply frame received for 1\nI0605 00:31:45.168500 2141 log.go:172] (0xc0008e2420) (0xc000526d20) Create stream\nI0605 00:31:45.168509 2141 log.go:172] (0xc0008e2420) (0xc000526d20) Stream added, broadcasting: 3\nI0605 00:31:45.169579 2141 log.go:172] (0xc0008e2420) Reply frame received for 3\nI0605 00:31:45.169613 2141 log.go:172] (0xc0008e2420) (0xc0003777c0) Create stream\nI0605 00:31:45.169627 2141 log.go:172] (0xc0008e2420) (0xc0003777c0) Stream added, broadcasting: 5\nI0605 00:31:45.170529 2141 log.go:172] (0xc0008e2420) Reply frame received for 5\nI0605 00:31:45.245446 2141 log.go:172] (0xc0008e2420) Data frame received for 5\nI0605 00:31:45.245490 2141 log.go:172] (0xc0003777c0) (5) Data frame handling\nI0605 00:31:45.245518 2141 log.go:172] (0xc0003777c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0605 00:31:45.886445 2141 log.go:172] (0xc0008e2420) Data frame received for 5\nI0605 00:31:45.886469 2141 log.go:172] (0xc0003777c0) (5) 
Data frame handling\nI0605 00:31:45.886482 2141 log.go:172] (0xc0003777c0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0605 00:31:45.886800 2141 log.go:172] (0xc0008e2420) Data frame received for 3\nI0605 00:31:45.886828 2141 log.go:172] (0xc000526d20) (3) Data frame handling\nI0605 00:31:45.887332 2141 log.go:172] (0xc0008e2420) Data frame received for 5\nI0605 00:31:45.887344 2141 log.go:172] (0xc0003777c0) (5) Data frame handling\nI0605 00:31:45.889365 2141 log.go:172] (0xc0008e2420) Data frame received for 1\nI0605 00:31:45.889388 2141 log.go:172] (0xc000867ea0) (1) Data frame handling\nI0605 00:31:45.889399 2141 log.go:172] (0xc000867ea0) (1) Data frame sent\nI0605 00:31:45.889411 2141 log.go:172] (0xc0008e2420) (0xc000867ea0) Stream removed, broadcasting: 1\nI0605 00:31:45.889424 2141 log.go:172] (0xc0008e2420) Go away received\nI0605 00:31:45.889841 2141 log.go:172] (0xc0008e2420) (0xc000867ea0) Stream removed, broadcasting: 1\nI0605 00:31:45.889858 2141 log.go:172] (0xc0008e2420) (0xc000526d20) Stream removed, broadcasting: 3\nI0605 00:31:45.889867 2141 log.go:172] (0xc0008e2420) (0xc0003777c0) Stream removed, broadcasting: 5\n" Jun 5 00:31:45.895: INFO: stdout: "" Jun 5 00:31:45.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3031 execpod-affinitymppdw -- /bin/sh -x -c nc -zv -t -w 2 10.111.245.81 80' Jun 5 00:31:46.148: INFO: stderr: "I0605 00:31:46.071790 2161 log.go:172] (0xc000b21290) (0xc000ada6e0) Create stream\nI0605 00:31:46.071841 2161 log.go:172] (0xc000b21290) (0xc000ada6e0) Stream added, broadcasting: 1\nI0605 00:31:46.077715 2161 log.go:172] (0xc000b21290) Reply frame received for 1\nI0605 00:31:46.077772 2161 log.go:172] (0xc000b21290) (0xc00083c5a0) Create stream\nI0605 00:31:46.077792 2161 log.go:172] (0xc000b21290) (0xc00083c5a0) Stream added, broadcasting: 3\nI0605 00:31:46.079083 2161 log.go:172] 
(0xc000b21290) Reply frame received for 3\nI0605 00:31:46.079156 2161 log.go:172] (0xc000b21290) (0xc000684280) Create stream\nI0605 00:31:46.079170 2161 log.go:172] (0xc000b21290) (0xc000684280) Stream added, broadcasting: 5\nI0605 00:31:46.080065 2161 log.go:172] (0xc000b21290) Reply frame received for 5\nI0605 00:31:46.141731 2161 log.go:172] (0xc000b21290) Data frame received for 3\nI0605 00:31:46.141752 2161 log.go:172] (0xc00083c5a0) (3) Data frame handling\nI0605 00:31:46.141907 2161 log.go:172] (0xc000b21290) Data frame received for 5\nI0605 00:31:46.141930 2161 log.go:172] (0xc000684280) (5) Data frame handling\nI0605 00:31:46.141950 2161 log.go:172] (0xc000684280) (5) Data frame sent\n+ nc -zv -t -w 2 10.111.245.81 80\nConnection to 10.111.245.81 80 port [tcp/http] succeeded!\nI0605 00:31:46.142014 2161 log.go:172] (0xc000b21290) Data frame received for 5\nI0605 00:31:46.142022 2161 log.go:172] (0xc000684280) (5) Data frame handling\nI0605 00:31:46.143744 2161 log.go:172] (0xc000b21290) Data frame received for 1\nI0605 00:31:46.143759 2161 log.go:172] (0xc000ada6e0) (1) Data frame handling\nI0605 00:31:46.143770 2161 log.go:172] (0xc000ada6e0) (1) Data frame sent\nI0605 00:31:46.143779 2161 log.go:172] (0xc000b21290) (0xc000ada6e0) Stream removed, broadcasting: 1\nI0605 00:31:46.143816 2161 log.go:172] (0xc000b21290) Go away received\nI0605 00:31:46.144053 2161 log.go:172] (0xc000b21290) (0xc000ada6e0) Stream removed, broadcasting: 1\nI0605 00:31:46.144070 2161 log.go:172] (0xc000b21290) (0xc00083c5a0) Stream removed, broadcasting: 3\nI0605 00:31:46.144077 2161 log.go:172] (0xc000b21290) (0xc000684280) Stream removed, broadcasting: 5\n" Jun 5 00:31:46.148: INFO: stdout: "" Jun 5 00:31:46.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3031 execpod-affinitymppdw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32381' Jun 5 00:31:47.558: INFO: stderr: "I0605 00:31:47.351169 
2182 log.go:172] (0xc000954000) (0xc000520140) Create stream\nI0605 00:31:47.351220 2182 log.go:172] (0xc000954000) (0xc000520140) Stream added, broadcasting: 1\nI0605 00:31:47.353579 2182 log.go:172] (0xc000954000) Reply frame received for 1\nI0605 00:31:47.353615 2182 log.go:172] (0xc000954000) (0xc00047ac80) Create stream\nI0605 00:31:47.353628 2182 log.go:172] (0xc000954000) (0xc00047ac80) Stream added, broadcasting: 3\nI0605 00:31:47.354280 2182 log.go:172] (0xc000954000) Reply frame received for 3\nI0605 00:31:47.354310 2182 log.go:172] (0xc000954000) (0xc0005210e0) Create stream\nI0605 00:31:47.354327 2182 log.go:172] (0xc000954000) (0xc0005210e0) Stream added, broadcasting: 5\nI0605 00:31:47.355002 2182 log.go:172] (0xc000954000) Reply frame received for 5\nI0605 00:31:47.538646 2182 log.go:172] (0xc000954000) Data frame received for 5\nI0605 00:31:47.538670 2182 log.go:172] (0xc0005210e0) (5) Data frame handling\nI0605 00:31:47.538686 2182 log.go:172] (0xc0005210e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32381\nI0605 00:31:47.550251 2182 log.go:172] (0xc000954000) Data frame received for 5\nI0605 00:31:47.550292 2182 log.go:172] (0xc0005210e0) (5) Data frame handling\nI0605 00:31:47.550321 2182 log.go:172] (0xc0005210e0) (5) Data frame sent\nConnection to 172.17.0.13 32381 port [tcp/32381] succeeded!\nI0605 00:31:47.550605 2182 log.go:172] (0xc000954000) Data frame received for 3\nI0605 00:31:47.550636 2182 log.go:172] (0xc00047ac80) (3) Data frame handling\nI0605 00:31:47.550817 2182 log.go:172] (0xc000954000) Data frame received for 5\nI0605 00:31:47.550840 2182 log.go:172] (0xc0005210e0) (5) Data frame handling\nI0605 00:31:47.552359 2182 log.go:172] (0xc000954000) Data frame received for 1\nI0605 00:31:47.552373 2182 log.go:172] (0xc000520140) (1) Data frame handling\nI0605 00:31:47.552396 2182 log.go:172] (0xc000520140) (1) Data frame sent\nI0605 00:31:47.552404 2182 log.go:172] (0xc000954000) (0xc000520140) Stream removed, broadcasting: 
1\nI0605 00:31:47.552619 2182 log.go:172] (0xc000954000) Go away received\nI0605 00:31:47.552672 2182 log.go:172] (0xc000954000) (0xc000520140) Stream removed, broadcasting: 1\nI0605 00:31:47.552699 2182 log.go:172] (0xc000954000) (0xc00047ac80) Stream removed, broadcasting: 3\nI0605 00:31:47.552719 2182 log.go:172] (0xc000954000) (0xc0005210e0) Stream removed, broadcasting: 5\n" Jun 5 00:31:47.558: INFO: stdout: "" Jun 5 00:31:47.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3031 execpod-affinitymppdw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32381' Jun 5 00:31:47.769: INFO: stderr: "I0605 00:31:47.689488 2202 log.go:172] (0xc000ab7080) (0xc000ac8460) Create stream\nI0605 00:31:47.689537 2202 log.go:172] (0xc000ab7080) (0xc000ac8460) Stream added, broadcasting: 1\nI0605 00:31:47.693685 2202 log.go:172] (0xc000ab7080) Reply frame received for 1\nI0605 00:31:47.693720 2202 log.go:172] (0xc000ab7080) (0xc0005500a0) Create stream\nI0605 00:31:47.693731 2202 log.go:172] (0xc000ab7080) (0xc0005500a0) Stream added, broadcasting: 3\nI0605 00:31:47.694542 2202 log.go:172] (0xc000ab7080) Reply frame received for 3\nI0605 00:31:47.694585 2202 log.go:172] (0xc000ab7080) (0xc0004d6be0) Create stream\nI0605 00:31:47.694600 2202 log.go:172] (0xc000ab7080) (0xc0004d6be0) Stream added, broadcasting: 5\nI0605 00:31:47.695443 2202 log.go:172] (0xc000ab7080) Reply frame received for 5\nI0605 00:31:47.760308 2202 log.go:172] (0xc000ab7080) Data frame received for 5\nI0605 00:31:47.760349 2202 log.go:172] (0xc0004d6be0) (5) Data frame handling\nI0605 00:31:47.760356 2202 log.go:172] (0xc0004d6be0) (5) Data frame sent\nI0605 00:31:47.760361 2202 log.go:172] (0xc000ab7080) Data frame received for 5\nI0605 00:31:47.760365 2202 log.go:172] (0xc0004d6be0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32381\nConnection to 172.17.0.12 32381 port [tcp/32381] succeeded!\nI0605 00:31:47.760398 
2202 log.go:172] (0xc000ab7080) Data frame received for 3\nI0605 00:31:47.760440 2202 log.go:172] (0xc0005500a0) (3) Data frame handling\nI0605 00:31:47.762180 2202 log.go:172] (0xc000ab7080) Data frame received for 1\nI0605 00:31:47.762220 2202 log.go:172] (0xc000ac8460) (1) Data frame handling\nI0605 00:31:47.762259 2202 log.go:172] (0xc000ac8460) (1) Data frame sent\nI0605 00:31:47.762295 2202 log.go:172] (0xc000ab7080) (0xc000ac8460) Stream removed, broadcasting: 1\nI0605 00:31:47.762442 2202 log.go:172] (0xc000ab7080) Go away received\nI0605 00:31:47.762864 2202 log.go:172] (0xc000ab7080) (0xc000ac8460) Stream removed, broadcasting: 1\nI0605 00:31:47.762887 2202 log.go:172] (0xc000ab7080) (0xc0005500a0) Stream removed, broadcasting: 3\nI0605 00:31:47.762902 2202 log.go:172] (0xc000ab7080) (0xc0004d6be0) Stream removed, broadcasting: 5\n" Jun 5 00:31:47.769: INFO: stdout: "" Jun 5 00:31:47.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3031 execpod-affinitymppdw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32381/ ; done' Jun 5 00:31:48.032: INFO: stderr: "I0605 00:31:47.893308 2224 log.go:172] (0xc000ab91e0) (0xc0006d3b80) Create stream\nI0605 00:31:47.893369 2224 log.go:172] (0xc000ab91e0) (0xc0006d3b80) Stream added, broadcasting: 1\nI0605 00:31:47.895908 2224 log.go:172] (0xc000ab91e0) Reply frame received for 1\nI0605 00:31:47.895955 2224 log.go:172] (0xc000ab91e0) (0xc00067ef00) Create stream\nI0605 00:31:47.895967 2224 log.go:172] (0xc000ab91e0) (0xc00067ef00) Stream added, broadcasting: 3\nI0605 00:31:47.896788 2224 log.go:172] (0xc000ab91e0) Reply frame received for 3\nI0605 00:31:47.896839 2224 log.go:172] (0xc000ab91e0) (0xc00062b2c0) Create stream\nI0605 00:31:47.896869 2224 log.go:172] (0xc000ab91e0) (0xc00062b2c0) Stream added, broadcasting: 5\nI0605 00:31:47.897837 2224 log.go:172] (0xc000ab91e0) Reply 
frame received for 5\nI0605 00:31:47.949593 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.949623 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.949632 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.949648 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.949654 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.949659 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.958353 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.958378 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.958392 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.959005 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.959024 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.959034 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.959058 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.959078 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.959121 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.964073 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.964105 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.964136 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.964603 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.964631 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.964649 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.964685 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.964712 2224 
log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.964745 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.969562 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.969607 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.969632 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.970102 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.970127 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.970146 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.970167 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.970197 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.970229 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.973698 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.973711 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.973717 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.973898 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.973909 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.973915 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0605 00:31:47.974020 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.974048 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.974071 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.974109 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.974136 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.974181 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n http://172.17.0.13:32381/\nI0605 00:31:47.977324 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 
00:31:47.977339 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.977498 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.977837 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.977874 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.977895 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.977955 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.977976 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.977996 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.980914 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.980950 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.980982 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.981459 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.981487 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.981499 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -sI0605 00:31:47.981510 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.981520 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.981531 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.981546 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.981556 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.981566 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.985822 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.985846 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.985864 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.986294 2224 log.go:172] (0xc000ab91e0) Data frame 
received for 3\nI0605 00:31:47.986321 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.986334 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.986356 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.986365 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.986376 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.989415 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.989444 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.989458 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.989993 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.990035 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.990077 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.990109 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.990133 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.990152 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.994331 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.994353 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.994366 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.994706 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.994728 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.994746 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:47.994827 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.994848 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.994874 2224 log.go:172] (0xc00067ef00) 
(3) Data frame sent\nI0605 00:31:47.998447 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.998482 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.998516 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.998885 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:47.998908 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:47.998921 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:47.998938 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:47.998968 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:47.998986 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.003230 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.003263 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.003308 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.003326 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.003342 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.003362 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.003403 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.003457 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\nI0605 00:31:48.003482 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.006840 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.006877 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.006917 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.007162 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.007189 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.007201 2224 
log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.007222 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.007232 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.007244 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.011096 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.011117 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.011131 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.011538 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.011586 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.011613 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.011648 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.011685 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.011719 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.014911 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.014960 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.015003 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.015320 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.015364 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.015383 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.015406 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.015436 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.015472 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.019805 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.019843 
2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.019868 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.020291 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.020314 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.020337 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.020374 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.020395 2224 log.go:172] (0xc00062b2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32381/\nI0605 00:31:48.020420 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.023660 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.023695 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.023716 2224 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0605 00:31:48.024231 2224 log.go:172] (0xc000ab91e0) Data frame received for 5\nI0605 00:31:48.024267 2224 log.go:172] (0xc000ab91e0) Data frame received for 3\nI0605 00:31:48.024319 2224 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0605 00:31:48.024349 2224 log.go:172] (0xc00062b2c0) (5) Data frame handling\nI0605 00:31:48.026971 2224 log.go:172] (0xc000ab91e0) Data frame received for 1\nI0605 00:31:48.027044 2224 log.go:172] (0xc0006d3b80) (1) Data frame handling\nI0605 00:31:48.027090 2224 log.go:172] (0xc0006d3b80) (1) Data frame sent\nI0605 00:31:48.027120 2224 log.go:172] (0xc000ab91e0) (0xc0006d3b80) Stream removed, broadcasting: 1\nI0605 00:31:48.027173 2224 log.go:172] (0xc000ab91e0) Go away received\nI0605 00:31:48.027764 2224 log.go:172] (0xc000ab91e0) (0xc0006d3b80) Stream removed, broadcasting: 1\nI0605 00:31:48.027801 2224 log.go:172] (0xc000ab91e0) (0xc00067ef00) Stream removed, broadcasting: 3\nI0605 00:31:48.027826 2224 log.go:172] (0xc000ab91e0) (0xc00062b2c0) Stream removed, broadcasting: 5\n" Jun 5 00:31:48.034: INFO: stdout: 
"\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp\naffinity-nodeport-wxgvp" Jun 5 00:31:48.034: INFO: Received response from host: Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Received response from host: affinity-nodeport-wxgvp Jun 5 00:31:48.034: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-3031, will wait for the garbage collector to delete the pods Jun 5 00:31:48.157: INFO: Deleting ReplicationController affinity-nodeport took: 
6.750219ms Jun 5 00:31:48.658: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.252512ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:53.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3031" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.279 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":188,"skipped":3078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:53.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 5 00:31:58.017: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:31:58.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7426" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3109,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:31:58.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a 
matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:32:03.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1575" for this suite. • [SLOW TEST:5.189 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":190,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:32:03.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 5 00:32:13.467: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:13.467: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:13.515656 7 log.go:172] (0xc001b01760) (0xc001efad20) Create stream I0605 00:32:13.515699 7 log.go:172] (0xc001b01760) (0xc001efad20) Stream added, broadcasting: 1 I0605 00:32:13.518425 7 log.go:172] (0xc001b01760) Reply frame received for 1 I0605 00:32:13.518481 7 log.go:172] (0xc001b01760) (0xc000c248c0) Create stream I0605 00:32:13.518503 7 log.go:172] (0xc001b01760) (0xc000c248c0) Stream added, broadcasting: 3 I0605 00:32:13.519712 7 log.go:172] (0xc001b01760) Reply frame received for 3 I0605 00:32:13.519780 7 log.go:172] (0xc001b01760) (0xc001efaf00) Create stream I0605 00:32:13.519796 7 log.go:172] (0xc001b01760) (0xc001efaf00) Stream added, broadcasting: 5 I0605 00:32:13.520674 7 log.go:172] (0xc001b01760) Reply frame received for 5 I0605 00:32:13.606726 7 log.go:172] (0xc001b01760) Data frame received for 3 I0605 00:32:13.606752 7 log.go:172] (0xc000c248c0) (3) Data frame handling I0605 00:32:13.606763 7 log.go:172] (0xc000c248c0) (3) Data frame sent I0605 00:32:13.606771 7 log.go:172] (0xc001b01760) Data frame received for 3 I0605 00:32:13.606778 7 log.go:172] (0xc000c248c0) (3) Data frame handling I0605 00:32:13.606863 7 log.go:172] (0xc001b01760) Data frame received for 5 I0605 00:32:13.606893 7 log.go:172] (0xc001efaf00) (5) Data frame handling I0605 00:32:13.608262 7 log.go:172] (0xc001b01760) Data frame received for 1 I0605 00:32:13.608273 7 log.go:172] (0xc001efad20) (1) Data frame handling I0605 00:32:13.608286 7 log.go:172] (0xc001efad20) (1) Data frame sent I0605 00:32:13.608308 7 log.go:172] (0xc001b01760) (0xc001efad20) Stream removed, broadcasting: 1 I0605 00:32:13.608384 7 log.go:172] (0xc001b01760) Go away received I0605 00:32:13.608410 7 log.go:172] (0xc001b01760) (0xc001efad20) Stream removed, broadcasting: 1 I0605 
00:32:13.608426 7 log.go:172] (0xc001b01760) (0xc000c248c0) Stream removed, broadcasting: 3 I0605 00:32:13.608495 7 log.go:172] (0xc001b01760) (0xc001efaf00) Stream removed, broadcasting: 5 Jun 5 00:32:13.608: INFO: Exec stderr: "" Jun 5 00:32:13.608: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:13.608: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:13.638146 7 log.go:172] (0xc002a073f0) (0xc001efb4a0) Create stream I0605 00:32:13.638185 7 log.go:172] (0xc002a073f0) (0xc001efb4a0) Stream added, broadcasting: 1 I0605 00:32:13.658116 7 log.go:172] (0xc002a073f0) Reply frame received for 1 I0605 00:32:13.658174 7 log.go:172] (0xc002a073f0) (0xc0011aa280) Create stream I0605 00:32:13.658195 7 log.go:172] (0xc002a073f0) (0xc0011aa280) Stream added, broadcasting: 3 I0605 00:32:13.659162 7 log.go:172] (0xc002a073f0) Reply frame received for 3 I0605 00:32:13.659200 7 log.go:172] (0xc002a073f0) (0xc0011aa820) Create stream I0605 00:32:13.659211 7 log.go:172] (0xc002a073f0) (0xc0011aa820) Stream added, broadcasting: 5 I0605 00:32:13.659902 7 log.go:172] (0xc002a073f0) Reply frame received for 5 I0605 00:32:13.716248 7 log.go:172] (0xc002a073f0) Data frame received for 5 I0605 00:32:13.716290 7 log.go:172] (0xc0011aa820) (5) Data frame handling I0605 00:32:13.716314 7 log.go:172] (0xc002a073f0) Data frame received for 3 I0605 00:32:13.716325 7 log.go:172] (0xc0011aa280) (3) Data frame handling I0605 00:32:13.716338 7 log.go:172] (0xc0011aa280) (3) Data frame sent I0605 00:32:13.716352 7 log.go:172] (0xc002a073f0) Data frame received for 3 I0605 00:32:13.716369 7 log.go:172] (0xc0011aa280) (3) Data frame handling I0605 00:32:13.718061 7 log.go:172] (0xc002a073f0) Data frame received for 1 I0605 00:32:13.718087 7 log.go:172] (0xc001efb4a0) (1) Data frame handling I0605 00:32:13.718101 7 log.go:172] 
(0xc001efb4a0) (1) Data frame sent I0605 00:32:13.718121 7 log.go:172] (0xc002a073f0) (0xc001efb4a0) Stream removed, broadcasting: 1 I0605 00:32:13.718142 7 log.go:172] (0xc002a073f0) Go away received I0605 00:32:13.718204 7 log.go:172] (0xc002a073f0) (0xc001efb4a0) Stream removed, broadcasting: 1 I0605 00:32:13.718217 7 log.go:172] (0xc002a073f0) (0xc0011aa280) Stream removed, broadcasting: 3 I0605 00:32:13.718222 7 log.go:172] (0xc002a073f0) (0xc0011aa820) Stream removed, broadcasting: 5 Jun 5 00:32:13.718: INFO: Exec stderr: "" Jun 5 00:32:13.718: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:13.718: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:13.746692 7 log.go:172] (0xc002e56580) (0xc0010795e0) Create stream I0605 00:32:13.746720 7 log.go:172] (0xc002e56580) (0xc0010795e0) Stream added, broadcasting: 1 I0605 00:32:13.748252 7 log.go:172] (0xc002e56580) Reply frame received for 1 I0605 00:32:13.748294 7 log.go:172] (0xc002e56580) (0xc001efb720) Create stream I0605 00:32:13.748314 7 log.go:172] (0xc002e56580) (0xc001efb720) Stream added, broadcasting: 3 I0605 00:32:13.749224 7 log.go:172] (0xc002e56580) Reply frame received for 3 I0605 00:32:13.749262 7 log.go:172] (0xc002e56580) (0xc0018240a0) Create stream I0605 00:32:13.749275 7 log.go:172] (0xc002e56580) (0xc0018240a0) Stream added, broadcasting: 5 I0605 00:32:13.750133 7 log.go:172] (0xc002e56580) Reply frame received for 5 I0605 00:32:13.810571 7 log.go:172] (0xc002e56580) Data frame received for 3 I0605 00:32:13.810684 7 log.go:172] (0xc001efb720) (3) Data frame handling I0605 00:32:13.810718 7 log.go:172] (0xc001efb720) (3) Data frame sent I0605 00:32:13.810751 7 log.go:172] (0xc002e56580) Data frame received for 3 I0605 00:32:13.810788 7 log.go:172] (0xc002e56580) Data frame received for 5 I0605 00:32:13.810818 7 log.go:172] 
(0xc0018240a0) (5) Data frame handling I0605 00:32:13.810865 7 log.go:172] (0xc001efb720) (3) Data frame handling I0605 00:32:13.812236 7 log.go:172] (0xc002e56580) Data frame received for 1 I0605 00:32:13.812277 7 log.go:172] (0xc0010795e0) (1) Data frame handling I0605 00:32:13.812319 7 log.go:172] (0xc0010795e0) (1) Data frame sent I0605 00:32:13.812457 7 log.go:172] (0xc002e56580) (0xc0010795e0) Stream removed, broadcasting: 1 I0605 00:32:13.812485 7 log.go:172] (0xc002e56580) Go away received I0605 00:32:13.812604 7 log.go:172] (0xc002e56580) (0xc0010795e0) Stream removed, broadcasting: 1 I0605 00:32:13.812643 7 log.go:172] (0xc002e56580) (0xc001efb720) Stream removed, broadcasting: 3 I0605 00:32:13.812669 7 log.go:172] (0xc002e56580) (0xc0018240a0) Stream removed, broadcasting: 5 Jun 5 00:32:13.812: INFO: Exec stderr: "" Jun 5 00:32:13.812: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:13.812: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:13.844766 7 log.go:172] (0xc002a07a20) (0xc001efbe00) Create stream I0605 00:32:13.844802 7 log.go:172] (0xc002a07a20) (0xc001efbe00) Stream added, broadcasting: 1 I0605 00:32:13.847040 7 log.go:172] (0xc002a07a20) Reply frame received for 1 I0605 00:32:13.847093 7 log.go:172] (0xc002a07a20) (0xc000c24d20) Create stream I0605 00:32:13.847116 7 log.go:172] (0xc002a07a20) (0xc000c24d20) Stream added, broadcasting: 3 I0605 00:32:13.848019 7 log.go:172] (0xc002a07a20) Reply frame received for 3 I0605 00:32:13.848063 7 log.go:172] (0xc002a07a20) (0xc000c24dc0) Create stream I0605 00:32:13.848086 7 log.go:172] (0xc002a07a20) (0xc000c24dc0) Stream added, broadcasting: 5 I0605 00:32:13.849052 7 log.go:172] (0xc002a07a20) Reply frame received for 5 I0605 00:32:13.909028 7 log.go:172] (0xc002a07a20) Data frame received for 5 I0605 00:32:13.909073 7 
log.go:172] (0xc000c24dc0) (5) Data frame handling I0605 00:32:13.909311 7 log.go:172] (0xc002a07a20) Data frame received for 3 I0605 00:32:13.909343 7 log.go:172] (0xc000c24d20) (3) Data frame handling I0605 00:32:13.909369 7 log.go:172] (0xc000c24d20) (3) Data frame sent I0605 00:32:13.909384 7 log.go:172] (0xc002a07a20) Data frame received for 3 I0605 00:32:13.909399 7 log.go:172] (0xc000c24d20) (3) Data frame handling I0605 00:32:13.910714 7 log.go:172] (0xc002a07a20) Data frame received for 1 I0605 00:32:13.910753 7 log.go:172] (0xc001efbe00) (1) Data frame handling I0605 00:32:13.910783 7 log.go:172] (0xc001efbe00) (1) Data frame sent I0605 00:32:13.910804 7 log.go:172] (0xc002a07a20) (0xc001efbe00) Stream removed, broadcasting: 1 I0605 00:32:13.910834 7 log.go:172] (0xc002a07a20) Go away received I0605 00:32:13.910980 7 log.go:172] (0xc002a07a20) (0xc001efbe00) Stream removed, broadcasting: 1 I0605 00:32:13.911015 7 log.go:172] (0xc002a07a20) (0xc000c24d20) Stream removed, broadcasting: 3 I0605 00:32:13.911026 7 log.go:172] (0xc002a07a20) (0xc000c24dc0) Stream removed, broadcasting: 5 Jun 5 00:32:13.911: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 5 00:32:13.911: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:13.911: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:13.943562 7 log.go:172] (0xc002e56bb0) (0xc0013baaa0) Create stream I0605 00:32:13.943588 7 log.go:172] (0xc002e56bb0) (0xc0013baaa0) Stream added, broadcasting: 1 I0605 00:32:13.945752 7 log.go:172] (0xc002e56bb0) Reply frame received for 1 I0605 00:32:13.945817 7 log.go:172] (0xc002e56bb0) (0xc001efbea0) Create stream I0605 00:32:13.945847 7 log.go:172] (0xc002e56bb0) (0xc001efbea0) Stream added, broadcasting: 3 I0605 00:32:13.946798 7 
log.go:172] (0xc002e56bb0) Reply frame received for 3 I0605 00:32:13.946859 7 log.go:172] (0xc002e56bb0) (0xc000c24fa0) Create stream I0605 00:32:13.946888 7 log.go:172] (0xc002e56bb0) (0xc000c24fa0) Stream added, broadcasting: 5 I0605 00:32:13.947929 7 log.go:172] (0xc002e56bb0) Reply frame received for 5 I0605 00:32:14.033353 7 log.go:172] (0xc002e56bb0) Data frame received for 5 I0605 00:32:14.033388 7 log.go:172] (0xc000c24fa0) (5) Data frame handling I0605 00:32:14.033407 7 log.go:172] (0xc002e56bb0) Data frame received for 3 I0605 00:32:14.033417 7 log.go:172] (0xc001efbea0) (3) Data frame handling I0605 00:32:14.033429 7 log.go:172] (0xc001efbea0) (3) Data frame sent I0605 00:32:14.033438 7 log.go:172] (0xc002e56bb0) Data frame received for 3 I0605 00:32:14.033448 7 log.go:172] (0xc001efbea0) (3) Data frame handling I0605 00:32:14.034732 7 log.go:172] (0xc002e56bb0) Data frame received for 1 I0605 00:32:14.034772 7 log.go:172] (0xc0013baaa0) (1) Data frame handling I0605 00:32:14.034789 7 log.go:172] (0xc0013baaa0) (1) Data frame sent I0605 00:32:14.034802 7 log.go:172] (0xc002e56bb0) (0xc0013baaa0) Stream removed, broadcasting: 1 I0605 00:32:14.034815 7 log.go:172] (0xc002e56bb0) Go away received I0605 00:32:14.034934 7 log.go:172] (0xc002e56bb0) (0xc0013baaa0) Stream removed, broadcasting: 1 I0605 00:32:14.034949 7 log.go:172] (0xc002e56bb0) (0xc001efbea0) Stream removed, broadcasting: 3 I0605 00:32:14.034957 7 log.go:172] (0xc002e56bb0) (0xc000c24fa0) Stream removed, broadcasting: 5 Jun 5 00:32:14.034: INFO: Exec stderr: "" Jun 5 00:32:14.034: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:14.035: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:14.064764 7 log.go:172] (0xc002e571e0) (0xc0013bb540) Create stream I0605 00:32:14.064793 7 log.go:172] (0xc002e571e0) (0xc0013bb540) Stream 
added, broadcasting: 1 I0605 00:32:14.066771 7 log.go:172] (0xc002e571e0) Reply frame received for 1 I0605 00:32:14.066808 7 log.go:172] (0xc002e571e0) (0xc000c25540) Create stream I0605 00:32:14.066820 7 log.go:172] (0xc002e571e0) (0xc000c25540) Stream added, broadcasting: 3 I0605 00:32:14.067736 7 log.go:172] (0xc002e571e0) Reply frame received for 3 I0605 00:32:14.067762 7 log.go:172] (0xc002e571e0) (0xc0011ab040) Create stream I0605 00:32:14.067771 7 log.go:172] (0xc002e571e0) (0xc0011ab040) Stream added, broadcasting: 5 I0605 00:32:14.068540 7 log.go:172] (0xc002e571e0) Reply frame received for 5 I0605 00:32:14.133090 7 log.go:172] (0xc002e571e0) Data frame received for 3 I0605 00:32:14.133301 7 log.go:172] (0xc000c25540) (3) Data frame handling I0605 00:32:14.133317 7 log.go:172] (0xc000c25540) (3) Data frame sent I0605 00:32:14.133327 7 log.go:172] (0xc002e571e0) Data frame received for 3 I0605 00:32:14.133343 7 log.go:172] (0xc000c25540) (3) Data frame handling I0605 00:32:14.133358 7 log.go:172] (0xc002e571e0) Data frame received for 5 I0605 00:32:14.133376 7 log.go:172] (0xc0011ab040) (5) Data frame handling I0605 00:32:14.134635 7 log.go:172] (0xc002e571e0) Data frame received for 1 I0605 00:32:14.134668 7 log.go:172] (0xc0013bb540) (1) Data frame handling I0605 00:32:14.134693 7 log.go:172] (0xc0013bb540) (1) Data frame sent I0605 00:32:14.134715 7 log.go:172] (0xc002e571e0) (0xc0013bb540) Stream removed, broadcasting: 1 I0605 00:32:14.134745 7 log.go:172] (0xc002e571e0) Go away received I0605 00:32:14.134908 7 log.go:172] (0xc002e571e0) (0xc0013bb540) Stream removed, broadcasting: 1 I0605 00:32:14.134928 7 log.go:172] (0xc002e571e0) (0xc000c25540) Stream removed, broadcasting: 3 I0605 00:32:14.134938 7 log.go:172] (0xc002e571e0) (0xc0011ab040) Stream removed, broadcasting: 5 Jun 5 00:32:14.134: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 5 00:32:14.134: INFO: 
ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:14.135: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:14.164072 7 log.go:172] (0xc002f702c0) (0xc001824780) Create stream I0605 00:32:14.164103 7 log.go:172] (0xc002f702c0) (0xc001824780) Stream added, broadcasting: 1 I0605 00:32:14.166180 7 log.go:172] (0xc002f702c0) Reply frame received for 1 I0605 00:32:14.166213 7 log.go:172] (0xc002f702c0) (0xc0011ab220) Create stream I0605 00:32:14.166223 7 log.go:172] (0xc002f702c0) (0xc0011ab220) Stream added, broadcasting: 3 I0605 00:32:14.167223 7 log.go:172] (0xc002f702c0) Reply frame received for 3 I0605 00:32:14.167254 7 log.go:172] (0xc002f702c0) (0xc0013bb680) Create stream I0605 00:32:14.167268 7 log.go:172] (0xc002f702c0) (0xc0013bb680) Stream added, broadcasting: 5 I0605 00:32:14.168302 7 log.go:172] (0xc002f702c0) Reply frame received for 5 I0605 00:32:14.225889 7 log.go:172] (0xc002f702c0) Data frame received for 3 I0605 00:32:14.225916 7 log.go:172] (0xc0011ab220) (3) Data frame handling I0605 00:32:14.225926 7 log.go:172] (0xc0011ab220) (3) Data frame sent I0605 00:32:14.225933 7 log.go:172] (0xc002f702c0) Data frame received for 3 I0605 00:32:14.225940 7 log.go:172] (0xc0011ab220) (3) Data frame handling I0605 00:32:14.225965 7 log.go:172] (0xc002f702c0) Data frame received for 5 I0605 00:32:14.225973 7 log.go:172] (0xc0013bb680) (5) Data frame handling I0605 00:32:14.227187 7 log.go:172] (0xc002f702c0) Data frame received for 1 I0605 00:32:14.227236 7 log.go:172] (0xc001824780) (1) Data frame handling I0605 00:32:14.227259 7 log.go:172] (0xc001824780) (1) Data frame sent I0605 00:32:14.227288 7 log.go:172] (0xc002f702c0) (0xc001824780) Stream removed, broadcasting: 1 I0605 00:32:14.227315 7 log.go:172] (0xc002f702c0) Go away received I0605 00:32:14.227389 7 log.go:172] (0xc002f702c0) 
(0xc001824780) Stream removed, broadcasting: 1 I0605 00:32:14.227409 7 log.go:172] (0xc002f702c0) (0xc0011ab220) Stream removed, broadcasting: 3 I0605 00:32:14.227419 7 log.go:172] (0xc002f702c0) (0xc0013bb680) Stream removed, broadcasting: 5 Jun 5 00:32:14.227: INFO: Exec stderr: "" Jun 5 00:32:14.227: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:14.227: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:14.256910 7 log.go:172] (0xc002f709a0) (0xc001825040) Create stream I0605 00:32:14.256940 7 log.go:172] (0xc002f709a0) (0xc001825040) Stream added, broadcasting: 1 I0605 00:32:14.258719 7 log.go:172] (0xc002f709a0) Reply frame received for 1 I0605 00:32:14.258755 7 log.go:172] (0xc002f709a0) (0xc001046140) Create stream I0605 00:32:14.258768 7 log.go:172] (0xc002f709a0) (0xc001046140) Stream added, broadcasting: 3 I0605 00:32:14.259658 7 log.go:172] (0xc002f709a0) Reply frame received for 3 I0605 00:32:14.259675 7 log.go:172] (0xc002f709a0) (0xc001046320) Create stream I0605 00:32:14.259681 7 log.go:172] (0xc002f709a0) (0xc001046320) Stream added, broadcasting: 5 I0605 00:32:14.260454 7 log.go:172] (0xc002f709a0) Reply frame received for 5 I0605 00:32:14.324579 7 log.go:172] (0xc002f709a0) Data frame received for 5 I0605 00:32:14.324635 7 log.go:172] (0xc001046320) (5) Data frame handling I0605 00:32:14.324668 7 log.go:172] (0xc002f709a0) Data frame received for 3 I0605 00:32:14.324687 7 log.go:172] (0xc001046140) (3) Data frame handling I0605 00:32:14.324718 7 log.go:172] (0xc001046140) (3) Data frame sent I0605 00:32:14.324744 7 log.go:172] (0xc002f709a0) Data frame received for 3 I0605 00:32:14.324756 7 log.go:172] (0xc001046140) (3) Data frame handling I0605 00:32:14.326560 7 log.go:172] (0xc002f709a0) Data frame received for 1 I0605 00:32:14.326599 7 log.go:172] 
(0xc001825040) (1) Data frame handling I0605 00:32:14.326620 7 log.go:172] (0xc001825040) (1) Data frame sent I0605 00:32:14.326642 7 log.go:172] (0xc002f709a0) (0xc001825040) Stream removed, broadcasting: 1 I0605 00:32:14.326761 7 log.go:172] (0xc002f709a0) (0xc001825040) Stream removed, broadcasting: 1 I0605 00:32:14.326794 7 log.go:172] (0xc002f709a0) (0xc001046140) Stream removed, broadcasting: 3 I0605 00:32:14.326811 7 log.go:172] (0xc002f709a0) (0xc001046320) Stream removed, broadcasting: 5 Jun 5 00:32:14.326: INFO: Exec stderr: "" Jun 5 00:32:14.326: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:14.326: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:14.326907 7 log.go:172] (0xc002f709a0) Go away received I0605 00:32:14.354767 7 log.go:172] (0xc002f70fd0) (0xc0018252c0) Create stream I0605 00:32:14.354794 7 log.go:172] (0xc002f70fd0) (0xc0018252c0) Stream added, broadcasting: 1 I0605 00:32:14.356765 7 log.go:172] (0xc002f70fd0) Reply frame received for 1 I0605 00:32:14.356803 7 log.go:172] (0xc002f70fd0) (0xc0011ab4a0) Create stream I0605 00:32:14.356818 7 log.go:172] (0xc002f70fd0) (0xc0011ab4a0) Stream added, broadcasting: 3 I0605 00:32:14.358137 7 log.go:172] (0xc002f70fd0) Reply frame received for 3 I0605 00:32:14.358175 7 log.go:172] (0xc002f70fd0) (0xc0010463c0) Create stream I0605 00:32:14.358190 7 log.go:172] (0xc002f70fd0) (0xc0010463c0) Stream added, broadcasting: 5 I0605 00:32:14.359085 7 log.go:172] (0xc002f70fd0) Reply frame received for 5 I0605 00:32:14.422603 7 log.go:172] (0xc002f70fd0) Data frame received for 3 I0605 00:32:14.422639 7 log.go:172] (0xc0011ab4a0) (3) Data frame handling I0605 00:32:14.422663 7 log.go:172] (0xc002f70fd0) Data frame received for 5 I0605 00:32:14.422725 7 log.go:172] (0xc0010463c0) (5) Data frame handling I0605 00:32:14.422756 7 
log.go:172] (0xc0011ab4a0) (3) Data frame sent I0605 00:32:14.422776 7 log.go:172] (0xc002f70fd0) Data frame received for 3 I0605 00:32:14.422795 7 log.go:172] (0xc0011ab4a0) (3) Data frame handling I0605 00:32:14.424452 7 log.go:172] (0xc002f70fd0) Data frame received for 1 I0605 00:32:14.424473 7 log.go:172] (0xc0018252c0) (1) Data frame handling I0605 00:32:14.424488 7 log.go:172] (0xc0018252c0) (1) Data frame sent I0605 00:32:14.424499 7 log.go:172] (0xc002f70fd0) (0xc0018252c0) Stream removed, broadcasting: 1 I0605 00:32:14.424592 7 log.go:172] (0xc002f70fd0) (0xc0018252c0) Stream removed, broadcasting: 1 I0605 00:32:14.424605 7 log.go:172] (0xc002f70fd0) (0xc0011ab4a0) Stream removed, broadcasting: 3 I0605 00:32:14.424681 7 log.go:172] (0xc002f70fd0) Go away received I0605 00:32:14.424785 7 log.go:172] (0xc002f70fd0) (0xc0010463c0) Stream removed, broadcasting: 5 Jun 5 00:32:14.424: INFO: Exec stderr: "" Jun 5 00:32:14.424: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:14.424: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:14.457837 7 log.go:172] (0xc002950000) (0xc001046500) Create stream I0605 00:32:14.457864 7 log.go:172] (0xc002950000) (0xc001046500) Stream added, broadcasting: 1 I0605 00:32:14.459645 7 log.go:172] (0xc002950000) Reply frame received for 1 I0605 00:32:14.459694 7 log.go:172] (0xc002950000) (0xc001825680) Create stream I0605 00:32:14.459720 7 log.go:172] (0xc002950000) (0xc001825680) Stream added, broadcasting: 3 I0605 00:32:14.460869 7 log.go:172] (0xc002950000) Reply frame received for 3 I0605 00:32:14.460920 7 log.go:172] (0xc002950000) (0xc0011ab7c0) Create stream I0605 00:32:14.460943 7 log.go:172] (0xc002950000) (0xc0011ab7c0) Stream added, broadcasting: 5 I0605 00:32:14.462392 7 log.go:172] (0xc002950000) Reply frame received for 5 I0605 
00:32:14.525987 7 log.go:172] (0xc002950000) Data frame received for 3 I0605 00:32:14.526026 7 log.go:172] (0xc001825680) (3) Data frame handling I0605 00:32:14.526043 7 log.go:172] (0xc001825680) (3) Data frame sent I0605 00:32:14.526055 7 log.go:172] (0xc002950000) Data frame received for 3 I0605 00:32:14.526064 7 log.go:172] (0xc001825680) (3) Data frame handling I0605 00:32:14.526101 7 log.go:172] (0xc002950000) Data frame received for 5 I0605 00:32:14.526118 7 log.go:172] (0xc0011ab7c0) (5) Data frame handling I0605 00:32:14.527325 7 log.go:172] (0xc002950000) Data frame received for 1 I0605 00:32:14.527345 7 log.go:172] (0xc001046500) (1) Data frame handling I0605 00:32:14.527365 7 log.go:172] (0xc001046500) (1) Data frame sent I0605 00:32:14.527380 7 log.go:172] (0xc002950000) (0xc001046500) Stream removed, broadcasting: 1 I0605 00:32:14.527407 7 log.go:172] (0xc002950000) Go away received I0605 00:32:14.527522 7 log.go:172] (0xc002950000) (0xc001046500) Stream removed, broadcasting: 1 I0605 00:32:14.527544 7 log.go:172] (0xc002950000) (0xc001825680) Stream removed, broadcasting: 3 I0605 00:32:14.527564 7 log.go:172] (0xc002950000) (0xc0011ab7c0) Stream removed, broadcasting: 5 Jun 5 00:32:14.527: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:32:14.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8832" for this suite. 
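The KubeletManagedEtcHosts test above execs `cat /etc/hosts` inside a `hostNetwork: true` pod and verifies the file is not kubelet-managed. A hypothetical minimal reproduction of that pod shape (names and images are illustrative, not the suite's exact spec):

```yaml
# Sketch only: with hostNetwork: true the kubelet leaves /etc/hosts
# unmanaged, so each container sees the node's own hosts file.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
```

The suite's assertion can be approximated by hand with `kubectl exec test-host-network-pod -c busybox-1 -- cat /etc/hosts` and checking that the output lacks the "# Kubernetes-managed hosts file" header.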
• [SLOW TEST:11.308 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:32:14.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-541 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 5 00:32:14.603: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 5 00:32:14.720: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:32:16.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:32:18.739: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 
00:32:20.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:22.733: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:24.733: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:26.727: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:28.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:30.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:32:32.725: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 5 00:32:32.731: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 5 00:32:34.736: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 5 00:32:36.735: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 5 00:32:38.735: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 5 00:32:44.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.202:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-541 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:44.798: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:44.833061 7 log.go:172] (0xc0027c8420) (0xc001750320) Create stream I0605 00:32:44.833104 7 log.go:172] (0xc0027c8420) (0xc001750320) Stream added, broadcasting: 1 I0605 00:32:44.835396 7 log.go:172] (0xc0027c8420) Reply frame received for 1 I0605 00:32:44.835439 7 log.go:172] (0xc0027c8420) (0xc0010466e0) Create stream I0605 00:32:44.835458 7 log.go:172] (0xc0027c8420) (0xc0010466e0) Stream added, broadcasting: 3 I0605 00:32:44.836333 7 log.go:172] (0xc0027c8420) Reply frame received for 3 I0605 00:32:44.836365 7 log.go:172] (0xc0027c8420) (0xc0017503c0) Create stream I0605 00:32:44.836375 7 log.go:172] (0xc0027c8420) (0xc0017503c0) Stream added, broadcasting: 5 
I0605 00:32:44.837459 7 log.go:172] (0xc0027c8420) Reply frame received for 5 I0605 00:32:44.897827 7 log.go:172] (0xc0027c8420) Data frame received for 3 I0605 00:32:44.897876 7 log.go:172] (0xc0010466e0) (3) Data frame handling I0605 00:32:44.897901 7 log.go:172] (0xc0010466e0) (3) Data frame sent I0605 00:32:44.898053 7 log.go:172] (0xc0027c8420) Data frame received for 3 I0605 00:32:44.898077 7 log.go:172] (0xc0010466e0) (3) Data frame handling I0605 00:32:44.898279 7 log.go:172] (0xc0027c8420) Data frame received for 5 I0605 00:32:44.898293 7 log.go:172] (0xc0017503c0) (5) Data frame handling I0605 00:32:44.900098 7 log.go:172] (0xc0027c8420) Data frame received for 1 I0605 00:32:44.900119 7 log.go:172] (0xc001750320) (1) Data frame handling I0605 00:32:44.900131 7 log.go:172] (0xc001750320) (1) Data frame sent I0605 00:32:44.900145 7 log.go:172] (0xc0027c8420) (0xc001750320) Stream removed, broadcasting: 1 I0605 00:32:44.900173 7 log.go:172] (0xc0027c8420) Go away received I0605 00:32:44.900292 7 log.go:172] (0xc0027c8420) (0xc001750320) Stream removed, broadcasting: 1 I0605 00:32:44.900315 7 log.go:172] (0xc0027c8420) (0xc0010466e0) Stream removed, broadcasting: 3 I0605 00:32:44.900326 7 log.go:172] (0xc0027c8420) (0xc0017503c0) Stream removed, broadcasting: 5 Jun 5 00:32:44.900: INFO: Found all expected endpoints: [netserver-0] Jun 5 00:32:44.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.110:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-541 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:32:44.903: INFO: >>> kubeConfig: /root/.kube/config I0605 00:32:44.929067 7 log.go:172] (0xc0029e6790) (0xc002479180) Create stream I0605 00:32:44.929095 7 log.go:172] (0xc0029e6790) (0xc002479180) Stream added, broadcasting: 1 I0605 00:32:44.931305 7 log.go:172] (0xc0029e6790) Reply frame received for 1 I0605 
00:32:44.931342 7 log.go:172] (0xc0029e6790) (0xc001750460) Create stream I0605 00:32:44.931354 7 log.go:172] (0xc0029e6790) (0xc001750460) Stream added, broadcasting: 3 I0605 00:32:44.932573 7 log.go:172] (0xc0029e6790) Reply frame received for 3 I0605 00:32:44.932608 7 log.go:172] (0xc0029e6790) (0xc001750500) Create stream I0605 00:32:44.932620 7 log.go:172] (0xc0029e6790) (0xc001750500) Stream added, broadcasting: 5 I0605 00:32:44.933477 7 log.go:172] (0xc0029e6790) Reply frame received for 5 I0605 00:32:45.014857 7 log.go:172] (0xc0029e6790) Data frame received for 3 I0605 00:32:45.014898 7 log.go:172] (0xc001750460) (3) Data frame handling I0605 00:32:45.014921 7 log.go:172] (0xc001750460) (3) Data frame sent I0605 00:32:45.014937 7 log.go:172] (0xc0029e6790) Data frame received for 3 I0605 00:32:45.014951 7 log.go:172] (0xc001750460) (3) Data frame handling I0605 00:32:45.015181 7 log.go:172] (0xc0029e6790) Data frame received for 5 I0605 00:32:45.015205 7 log.go:172] (0xc001750500) (5) Data frame handling I0605 00:32:45.016898 7 log.go:172] (0xc0029e6790) Data frame received for 1 I0605 00:32:45.016925 7 log.go:172] (0xc002479180) (1) Data frame handling I0605 00:32:45.016950 7 log.go:172] (0xc002479180) (1) Data frame sent I0605 00:32:45.016974 7 log.go:172] (0xc0029e6790) (0xc002479180) Stream removed, broadcasting: 1 I0605 00:32:45.016999 7 log.go:172] (0xc0029e6790) Go away received I0605 00:32:45.017242 7 log.go:172] (0xc0029e6790) (0xc002479180) Stream removed, broadcasting: 1 I0605 00:32:45.017260 7 log.go:172] (0xc0029e6790) (0xc001750460) Stream removed, broadcasting: 3 I0605 00:32:45.017277 7 log.go:172] (0xc0029e6790) (0xc001750500) Stream removed, broadcasting: 5 Jun 5 00:32:45.017: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:32:45.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "pod-network-test-541" for this suite. • [SLOW TEST:30.489 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":3178,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:32:45.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 5 00:32:45.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run 
e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8721' Jun 5 00:32:48.437: INFO: stderr: "" Jun 5 00:32:48.437: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 Jun 5 00:32:48.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8721' Jun 5 00:32:55.464: INFO: stderr: "" Jun 5 00:32:55.464: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:32:55.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8721" for this suite. • [SLOW TEST:10.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":193,"skipped":3180,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Jun 5 00:32:55.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9776/configmap-test-c6dab2c9-6023-4311-92f8-d84920fa810f STEP: Creating a pod to test consume configMaps Jun 5 00:32:55.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42" in namespace "configmap-9776" to be "Succeeded or Failed" Jun 5 00:32:55.788: INFO: Pod "pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42": Phase="Pending", Reason="", readiness=false. Elapsed: 21.553683ms Jun 5 00:32:57.794: INFO: Pod "pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027478256s Jun 5 00:32:59.798: INFO: Pod "pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031960432s STEP: Saw pod success Jun 5 00:32:59.798: INFO: Pod "pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42" satisfied condition "Succeeded or Failed" Jun 5 00:32:59.801: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42 container env-test: STEP: delete the pod Jun 5 00:32:59.893: INFO: Waiting for pod pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42 to disappear Jun 5 00:32:59.943: INFO: Pod pod-configmaps-1a10234c-cd3c-4e51-a15e-8f5be7c11b42 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:32:59.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9776" for this suite. 
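The ConfigMap test above creates a ConfigMap, injects one of its keys into a container's environment, and waits for the pod to reach "Succeeded". A minimal sketch of that pattern, assuming illustrative names (the suite generates random ones):

```yaml
# Sketch: consume a ConfigMap key via the environment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints CONFIG_DATA_1=value-1 among the env vars
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod runs to completion because the command exits immediately, matching the "Succeeded or Failed" condition the test polls for.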
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3201,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:32:59.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:33:00.050: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:33:06.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3481" for this suite. 
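The CustomResourceDefinition listing test registers CRDs and lists them through the apiextensions API. A hypothetical minimal v1 CRD of the kind such tests create (group and names here are illustrative):

```yaml
# Sketch of a minimal namespaced CRD; listing then works via
# `kubectl get crds` or the apiextensions.k8s.io/v1 API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```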
• [SLOW TEST:7.052 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":195,"skipped":3203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:33:07.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 00:33:08.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 00:33:10.335: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:33:12.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726913988, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with 
the endpoint Jun 5 00:33:15.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:33:15.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:33:16.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4786" for this suite. STEP: Destroying namespace "webhook-4786-markers" for this suite. 
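The admission webhook test registers a validating webhook that rejects create, update, and delete operations on a custom resource. A rough sketch of that registration, under assumed names (the CR group, resource, service name, and path are placeholders, not the suite's actual values):

```yaml
# Sketch: a ValidatingWebhookConfiguration that intercepts all three
# verbs on a custom resource and fails closed.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook   # illustrative name
webhooks:
- name: deny-custom-resource.example.com
  rules:
  - apiGroups: ["mygroup.example.com"]   # assumed CR group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["noxus"]                 # assumed resource plural
  clientConfig:
    service:
      namespace: default                 # the suite uses its own namespace
      name: e2e-test-webhook
      path: /custom-resource             # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With `failurePolicy: Fail`, requests the webhook denies (or cannot reach) are rejected, which is the behavior the "should be able to deny" steps above verify.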
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.703 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":196,"skipped":3235,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:33:16.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 5 00:33:16.792: INFO: Waiting up to 5m0s for pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d" in namespace "emptydir-8674" to be "Succeeded or Failed" Jun 5 00:33:16.810: INFO: Pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.328733ms Jun 5 00:33:18.814: INFO: Pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02164136s Jun 5 00:33:20.865: INFO: Pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073149524s Jun 5 00:33:22.997: INFO: Pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205050753s STEP: Saw pod success Jun 5 00:33:22.997: INFO: Pod "pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d" satisfied condition "Succeeded or Failed" Jun 5 00:33:23.000: INFO: Trying to get logs from node latest-worker2 pod pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d container test-container: STEP: delete the pod Jun 5 00:33:23.162: INFO: Waiting for pod pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d to disappear Jun 5 00:33:23.166: INFO: Pod pod-7bf82ad4-d5be-4112-a3e1-2d4f9892e71d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:33:23.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8674" for this suite. 
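The EmptyDir "(root,0666,default)" test writes a file with mode 0666 into an emptyDir on the default medium and checks the resulting permissions. An illustrative pod (not the suite's exact spec, which uses the agnhost mounttest image):

```yaml
# Sketch: create a 0666 file in an emptyDir and print its mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node disk); medium: Memory would use tmpfs
```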
• [SLOW TEST:6.464 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3244,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:33:23.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 5 00:33:27.844: INFO: Successfully updated pod "annotationupdate699a5a67-6c2b-46aa-9c38-ad1ab8bb51a2" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:33:31.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5853" for this suite. 
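The Downward API volume test above creates a pod that projects its own annotations into a file, updates the annotations via the API, and waits for the kubelet to refresh the projected file. A minimal sketch, with an illustrative annotation key:

```yaml
# Sketch: project pod annotations through a downwardAPI volume; the
# kubelet rewrites the file when the annotations change.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    builder: alice   # sample annotation; the test mutates this live
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```

Annotating the running pod (e.g. `kubectl annotate pod annotationupdate-example --overwrite builder=bob`) eventually changes the file's contents, which is what "should update annotations on modification" asserts.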
• [SLOW TEST:8.722 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:33:31.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:33:31.976: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 5 00:33:33.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 create -f -' Jun 5 00:33:36.743: INFO: stderr: "" Jun 5 00:33:36.743: INFO: stdout: "e2e-test-crd-publish-openapi-76-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 5 00:33:36.744: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 delete e2e-test-crd-publish-openapi-76-crds test-cr' Jun 5 00:33:36.855: INFO: stderr: "" Jun 5 00:33:36.855: INFO: stdout: "e2e-test-crd-publish-openapi-76-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 5 00:33:36.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 apply -f -' Jun 5 00:33:38.283: INFO: stderr: "" Jun 5 00:33:38.283: INFO: stdout: "e2e-test-crd-publish-openapi-76-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 5 00:33:38.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 delete e2e-test-crd-publish-openapi-76-crds test-cr' Jun 5 00:33:38.505: INFO: stderr: "" Jun 5 00:33:38.505: INFO: stdout: "e2e-test-crd-publish-openapi-76-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 5 00:33:38.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-76-crds' Jun 5 00:33:38.726: INFO: stderr: "" Jun 5 00:33:38.726: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-76-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:33:40.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6609" for this suite. 
• [SLOW TEST:8.767 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":199,"skipped":3302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:33:40.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 5 00:33:40.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341754 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2020-06-05 00:33:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:33:40.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341754 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:33:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 5 00:33:50.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341793 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:33:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:33:50.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341793 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:33:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 5 00:34:00.765: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341823 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:34:00.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341823 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 5 00:34:10.773: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341853 0 2020-06-05 00:33:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:34:10.773: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-a e01b642b-7131-42df-adcd-086b5939f1ac 10341853 0 2020-06-05 00:33:40 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 5 00:34:20.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-b 153a3515-8529-4864-b7e5-f03b5ba0d32d 10341884 0 2020-06-05 00:34:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 5 00:34:20.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-b 153a3515-8529-4864-b7e5-f03b5ba0d32d 10341884 0 2020-06-05 00:34:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 5 00:34:30.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-b 153a3515-8529-4864-b7e5-f03b5ba0d32d 10341915 0 2020-06-05 00:34:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} 
Jun 5 00:34:30.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5624 /api/v1/namespaces/watch-5624/configmaps/e2e-watch-test-configmap-b 153a3515-8529-4864-b7e5-f03b5ba0d32d 10341915 0 2020-06-05 00:34:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-05 00:34:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:34:40.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5624" for this suite. • [SLOW TEST:60.138 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":200,"skipped":3327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:34:40.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Jun 5 00:34:40.906: INFO: Waiting up to 5m0s for pod "pod-c31c89af-de45-438e-b5a4-6cd793859cfa" in namespace "emptydir-3080" to be "Succeeded or Failed" Jun 5 00:34:40.910: INFO: Pod "pod-c31c89af-de45-438e-b5a4-6cd793859cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.326718ms Jun 5 00:34:42.914: INFO: Pod "pod-c31c89af-de45-438e-b5a4-6cd793859cfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007448844s Jun 5 00:34:44.918: INFO: Pod "pod-c31c89af-de45-438e-b5a4-6cd793859cfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011768731s STEP: Saw pod success Jun 5 00:34:44.918: INFO: Pod "pod-c31c89af-de45-438e-b5a4-6cd793859cfa" satisfied condition "Succeeded or Failed" Jun 5 00:34:44.921: INFO: Trying to get logs from node latest-worker2 pod pod-c31c89af-de45-438e-b5a4-6cd793859cfa container test-container: STEP: delete the pod Jun 5 00:34:45.110: INFO: Waiting for pod pod-c31c89af-de45-438e-b5a4-6cd793859cfa to disappear Jun 5 00:34:45.125: INFO: Pod pod-c31c89af-de45-438e-b5a4-6cd793859cfa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:34:45.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3080" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3359,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:34:45.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 5 00:34:45.184: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 5 00:34:54.819: INFO: >>> kubeConfig: /root/.kube/config Jun 5 00:34:57.776: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:35:08.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1617" for this suite. 
• [SLOW TEST:23.361 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":202,"skipped":3368,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:35:08.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-809184dc-08a4-4bf5-8129-993e24b9add4 in namespace container-probe-7645 Jun 5 00:35:12.619: INFO: Started pod test-webserver-809184dc-08a4-4bf5-8129-993e24b9add4 in namespace container-probe-7645 STEP: checking the pod's current state and verifying that restartCount is present Jun 5 00:35:12.622: INFO: Initial restart count of pod 
test-webserver-809184dc-08a4-4bf5-8129-993e24b9add4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:39:13.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7645" for this suite. • [SLOW TEST:245.035 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3373,"failed":0} [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:39:13.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 5 00:39:13.605: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: 
verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:39:21.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2607" for this suite. • [SLOW TEST:7.789 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3373,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:39:21.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 5 00:39:29.515: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 5 00:39:29.543: INFO: Pod pod-with-poststart-http-hook still exists Jun 5 00:39:31.543: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 5 00:39:31.548: INFO: Pod pod-with-poststart-http-hook still exists Jun 5 00:39:33.543: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 5 00:39:33.549: INFO: Pod pod-with-poststart-http-hook still exists Jun 5 00:39:35.543: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 5 00:39:35.547: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:39:35.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6179" for this suite. 
• [SLOW TEST:14.240 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3382,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:39:35.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:39:35.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 5 00:39:36.269: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:36Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] 
f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:39:36Z]] name:name1 resourceVersion:10342950 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a72022b5-d80c-4166-9a64-534d80a14bc6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 5 00:39:46.275: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:46Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:39:46Z]] name:name2 resourceVersion:10342998 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:625fa1bb-cf6c-458a-bbc1-47d0841b93e8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 5 00:39:56.284: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:39:56Z]] name:name1 resourceVersion:10343028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a72022b5-d80c-4166-9a64-534d80a14bc6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 5 00:40:06.290: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] 
f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:40:06Z]] name:name2 resourceVersion:10343058 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:625fa1bb-cf6c-458a-bbc1-47d0841b93e8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 5 00:40:16.299: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:39:56Z]] name:name1 resourceVersion:10343086 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a72022b5-d80c-4166-9a64-534d80a14bc6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 5 00:40:26.309: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-05T00:39:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-05T00:40:06Z]] name:name2 resourceVersion:10343117 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:625fa1bb-cf6c-458a-bbc1-47d0841b93e8] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:40:36.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4823" for this suite. 
• [SLOW TEST:61.291 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":206,"skipped":3384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:40:36.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search 
dns-test-service.dns-7644 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7644;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7644 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7644;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7644.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7644.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7644.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7644.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7644.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7644.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.58.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.58.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.58.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.58.209_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7644 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7644;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7644 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7644;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7644.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7644.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7644.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7644.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7644.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7644.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7644.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7644.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.58.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.58.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.58.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.58.209_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:40:43.148: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.163: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.171: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods 
dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.183: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.186: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.206: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.209: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.212: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.218: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested 
resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.223: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.226: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:43.263: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] Jun 5 00:40:48.268: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.273: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the 
requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.286: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.288: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.311: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.335: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.338: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could 
not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.340: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.346: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:48.395: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] Jun 5 00:40:53.268: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.272: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.276: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.286: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.290: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod 
dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.293: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.319: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.322: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.325: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.329: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.333: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.338: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.341: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.344: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:53.360: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] Jun 5 00:40:58.267: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.271: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.277: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.306: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.310: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.314: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.343: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.346: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.349: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.352: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.355: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.358: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.361: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:40:58.384: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] 
Jun 5 00:41:03.269: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.274: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.278: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.292: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.296: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods 
dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.318: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.321: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.324: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.328: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.332: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.338: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.341: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested 
resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:03.361: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] Jun 5 00:41:08.269: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.273: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.276: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods 
dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.286: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.290: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.294: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.319: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.322: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.326: INFO: Unable to read jessie_udp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644 from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.336: INFO: Unable to read jessie_udp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested 
resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.341: INFO: Unable to read jessie_tcp@dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.343: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.346: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc from pod dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1: the server could not find the requested resource (get pods dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1) Jun 5 00:41:08.361: INFO: Lookups using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7644 wheezy_tcp@dns-test-service.dns-7644 wheezy_udp@dns-test-service.dns-7644.svc wheezy_tcp@dns-test-service.dns-7644.svc wheezy_udp@_http._tcp.dns-test-service.dns-7644.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7644.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7644 jessie_tcp@dns-test-service.dns-7644 jessie_udp@dns-test-service.dns-7644.svc jessie_tcp@dns-test-service.dns-7644.svc jessie_udp@_http._tcp.dns-test-service.dns-7644.svc jessie_tcp@_http._tcp.dns-test-service.dns-7644.svc] Jun 5 00:41:13.347: INFO: DNS probes using dns-7644/dns-test-dd0a3eac-339d-436f-b293-2990e5808fe1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:41:14.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
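Note: the probe commands recorded above derive two DNS names worth calling out: the pod A record (the pod IP with dots replaced by dashes, under `<namespace>.pod.cluster.local`) and the reversed `in-addr.arpa` name used for the PTR checks (the service IP `10.107.58.209` becomes `209.58.107.10.in-addr.arpa.`). A minimal Python sketch of the same string logic as the `awk` pipeline in the probe script (function names are illustrative, not part of the e2e framework):

```python
def pod_a_record(ip: str, namespace: str) -> str:
    # Mirrors: hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    # Reverse the octets to form the in-addr.arpa name queried for PTR records,
    # as in the probe commands above.
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.107.58.209", "dns-7644"))  # 10-107-58-209.dns-7644.pod.cluster.local
print(ptr_name("10.107.58.209"))                  # 209.58.107.10.in-addr.arpa.
```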
STEP: Destroying namespace "dns-7644" for this suite. • [SLOW TEST:37.254 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":207,"skipped":3410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:41:14.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jun 5 00:41:14.237: INFO: Waiting up to 5m0s for pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c" in namespace "var-expansion-1440" to be "Succeeded or Failed" Jun 5 00:41:14.253: INFO: Pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.413461ms Jun 5 00:41:16.858: INFO: Pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.620804271s Jun 5 00:41:18.862: INFO: Pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c": Phase="Running", Reason="", readiness=true. Elapsed: 4.624699139s Jun 5 00:41:20.865: INFO: Pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.627984304s STEP: Saw pod success Jun 5 00:41:20.865: INFO: Pod "var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c" satisfied condition "Succeeded or Failed" Jun 5 00:41:20.868: INFO: Trying to get logs from node latest-worker pod var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c container dapi-container: STEP: delete the pod Jun 5 00:41:20.960: INFO: Waiting for pod var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c to disappear Jun 5 00:41:20.966: INFO: Pod var-expansion-087b0d39-97bc-4475-80a7-fac8ca80195c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:41:20.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1440" for this suite. 
• [SLOW TEST:6.899 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":208,"skipped":3435,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:41:21.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 5 00:41:26.134: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:41:26.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1577" for this suite. 
• [SLOW TEST:5.255 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":209,"skipped":3439,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:41:26.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-515f2dfb-4afc-4730-877e-11f8a7fd8bf8 STEP: Creating a pod to test consume secrets Jun 5 00:41:26.375: INFO: Waiting up to 5m0s for pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80" in namespace "secrets-8333" to be "Succeeded or Failed" Jun 5 00:41:26.389: INFO: Pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80": Phase="Pending", Reason="", readiness=false. Elapsed: 14.7028ms Jun 5 00:41:28.393: INFO: Pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017935739s Jun 5 00:41:30.399: INFO: Pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02391063s Jun 5 00:41:32.409: INFO: Pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034727761s STEP: Saw pod success Jun 5 00:41:32.409: INFO: Pod "pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80" satisfied condition "Succeeded or Failed" Jun 5 00:41:32.434: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80 container secret-volume-test: STEP: delete the pod Jun 5 00:41:32.502: INFO: Waiting for pod pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80 to disappear Jun 5 00:41:32.570: INFO: Pod pod-secrets-f22c1870-f582-41bc-a64d-407841bc6c80 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:41:32.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8333" for this suite. 
• [SLOW TEST:6.319 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3451,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:41:32.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:43:32.862: INFO: Deleting pod "var-expansion-01402bdf-7d19-49eb-b267-15878ac5abbc" in namespace "var-expansion-923" Jun 5 00:43:32.868: INFO: Wait up to 5m0s for pod "var-expansion-01402bdf-7d19-49eb-b267-15878ac5abbc" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:43:36.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-923" for this 
suite. • [SLOW TEST:124.326 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":211,"skipped":3462,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:43:36.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:43:37.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1557" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3468,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:43:37.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod Jun 5 00:43:37.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7012' Jun 5 00:43:40.011: INFO: stderr: "" Jun 5 00:43:40.011: INFO: stdout: "pod/pause created\n" Jun 5 00:43:40.011: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 5 00:43:40.011: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7012" to be "running and ready" Jun 5 00:43:40.024: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.789998ms Jun 5 00:43:42.030: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019121941s Jun 5 00:43:44.034: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.023582364s Jun 5 00:43:44.034: INFO: Pod "pause" satisfied condition "running and ready" Jun 5 00:43:44.034: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jun 5 00:43:44.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7012' Jun 5 00:43:44.139: INFO: stderr: "" Jun 5 00:43:44.139: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 5 00:43:44.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7012' Jun 5 00:43:44.231: INFO: stderr: "" Jun 5 00:43:44.231: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 5 00:43:44.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7012' Jun 5 00:43:44.348: INFO: stderr: "" Jun 5 00:43:44.348: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 5 00:43:44.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7012' Jun 5 00:43:44.444: INFO: stderr: "" Jun 5 00:43:44.444: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: 
using delete to clean up resources Jun 5 00:43:44.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7012' Jun 5 00:43:44.576: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:43:44.576: INFO: stdout: "pod \"pause\" force deleted\n" Jun 5 00:43:44.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7012' Jun 5 00:43:44.689: INFO: stderr: "No resources found in kubectl-7012 namespace.\n" Jun 5 00:43:44.689: INFO: stdout: "" Jun 5 00:43:44.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7012 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 5 00:43:44.788: INFO: stderr: "" Jun 5 00:43:44.788: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:43:44.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7012" for this suite. 
• [SLOW TEST:7.680 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":213,"skipped":3480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:43:44.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 00:43:48.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 00:43:50.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914628, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914628, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914628, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914627, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:43:53.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:43:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5918" for this suite. STEP: Destroying namespace "webhook-5918-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.758 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":214,"skipped":3510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:43:53.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:43:53.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-870" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":215,"skipped":3559,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:43:53.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2741 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 5 00:43:53.901: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 5 00:43:53.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:43:56.122: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:43:57.984: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 5 00:43:59.983: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:44:01.984: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:44:03.983: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:44:05.984: INFO: The status of Pod netserver-0 is Running 
(Ready = false) Jun 5 00:44:07.983: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 5 00:44:09.991: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 5 00:44:09.996: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 5 00:44:14.052: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.212:8080/dial?request=hostname&protocol=udp&host=10.244.1.211&port=8081&tries=1'] Namespace:pod-network-test-2741 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:44:14.052: INFO: >>> kubeConfig: /root/.kube/config I0605 00:44:14.084355 7 log.go:172] (0xc002e566e0) (0xc002478820) Create stream I0605 00:44:14.084383 7 log.go:172] (0xc002e566e0) (0xc002478820) Stream added, broadcasting: 1 I0605 00:44:14.086444 7 log.go:172] (0xc002e566e0) Reply frame received for 1 I0605 00:44:14.086487 7 log.go:172] (0xc002e566e0) (0xc001046fa0) Create stream I0605 00:44:14.086507 7 log.go:172] (0xc002e566e0) (0xc001046fa0) Stream added, broadcasting: 3 I0605 00:44:14.087705 7 log.go:172] (0xc002e566e0) Reply frame received for 3 I0605 00:44:14.087744 7 log.go:172] (0xc002e566e0) (0xc002a63d60) Create stream I0605 00:44:14.087758 7 log.go:172] (0xc002e566e0) (0xc002a63d60) Stream added, broadcasting: 5 I0605 00:44:14.088714 7 log.go:172] (0xc002e566e0) Reply frame received for 5 I0605 00:44:14.364195 7 log.go:172] (0xc002e566e0) Data frame received for 3 I0605 00:44:14.364242 7 log.go:172] (0xc001046fa0) (3) Data frame handling I0605 00:44:14.364280 7 log.go:172] (0xc001046fa0) (3) Data frame sent I0605 00:44:14.364741 7 log.go:172] (0xc002e566e0) Data frame received for 3 I0605 00:44:14.364773 7 log.go:172] (0xc001046fa0) (3) Data frame handling I0605 00:44:14.364808 7 log.go:172] (0xc002e566e0) Data frame received for 5 I0605 00:44:14.364839 7 log.go:172] (0xc002a63d60) (5) Data frame handling I0605 00:44:14.366867 7 
log.go:172] (0xc002e566e0) Data frame received for 1 I0605 00:44:14.366885 7 log.go:172] (0xc002478820) (1) Data frame handling I0605 00:44:14.366892 7 log.go:172] (0xc002478820) (1) Data frame sent I0605 00:44:14.366900 7 log.go:172] (0xc002e566e0) (0xc002478820) Stream removed, broadcasting: 1 I0605 00:44:14.366993 7 log.go:172] (0xc002e566e0) (0xc002478820) Stream removed, broadcasting: 1 I0605 00:44:14.367009 7 log.go:172] (0xc002e566e0) (0xc001046fa0) Stream removed, broadcasting: 3 I0605 00:44:14.367077 7 log.go:172] (0xc002e566e0) Go away received I0605 00:44:14.367191 7 log.go:172] (0xc002e566e0) (0xc002a63d60) Stream removed, broadcasting: 5 Jun 5 00:44:14.367: INFO: Waiting for responses: map[] Jun 5 00:44:14.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.212:8080/dial?request=hostname&protocol=udp&host=10.244.2.122&port=8081&tries=1'] Namespace:pod-network-test-2741 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 5 00:44:14.370: INFO: >>> kubeConfig: /root/.kube/config I0605 00:44:14.405917 7 log.go:172] (0xc0029e68f0) (0xc001047c20) Create stream I0605 00:44:14.405946 7 log.go:172] (0xc0029e68f0) (0xc001047c20) Stream added, broadcasting: 1 I0605 00:44:14.407799 7 log.go:172] (0xc0029e68f0) Reply frame received for 1 I0605 00:44:14.407832 7 log.go:172] (0xc0029e68f0) (0xc0013f4960) Create stream I0605 00:44:14.407843 7 log.go:172] (0xc0029e68f0) (0xc0013f4960) Stream added, broadcasting: 3 I0605 00:44:14.408990 7 log.go:172] (0xc0029e68f0) Reply frame received for 3 I0605 00:44:14.409027 7 log.go:172] (0xc0029e68f0) (0xc0013f4b40) Create stream I0605 00:44:14.409041 7 log.go:172] (0xc0029e68f0) (0xc0013f4b40) Stream added, broadcasting: 5 I0605 00:44:14.410498 7 log.go:172] (0xc0029e68f0) Reply frame received for 5 I0605 00:44:14.479715 7 log.go:172] (0xc0029e68f0) Data frame received for 3 I0605 00:44:14.479736 7 log.go:172] (0xc0013f4960) (3) 
Data frame handling I0605 00:44:14.479748 7 log.go:172] (0xc0013f4960) (3) Data frame sent I0605 00:44:14.480523 7 log.go:172] (0xc0029e68f0) Data frame received for 5 I0605 00:44:14.480537 7 log.go:172] (0xc0013f4b40) (5) Data frame handling I0605 00:44:14.480569 7 log.go:172] (0xc0029e68f0) Data frame received for 3 I0605 00:44:14.480589 7 log.go:172] (0xc0013f4960) (3) Data frame handling I0605 00:44:14.482287 7 log.go:172] (0xc0029e68f0) Data frame received for 1 I0605 00:44:14.482315 7 log.go:172] (0xc001047c20) (1) Data frame handling I0605 00:44:14.482326 7 log.go:172] (0xc001047c20) (1) Data frame sent I0605 00:44:14.482344 7 log.go:172] (0xc0029e68f0) (0xc001047c20) Stream removed, broadcasting: 1 I0605 00:44:14.482365 7 log.go:172] (0xc0029e68f0) Go away received I0605 00:44:14.482503 7 log.go:172] (0xc0029e68f0) (0xc001047c20) Stream removed, broadcasting: 1 I0605 00:44:14.482523 7 log.go:172] (0xc0029e68f0) (0xc0013f4960) Stream removed, broadcasting: 3 I0605 00:44:14.482533 7 log.go:172] (0xc0029e68f0) (0xc0013f4b40) Stream removed, broadcasting: 5 Jun 5 00:44:14.482: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:44:14.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2741" for this suite. 
• [SLOW TEST:20.666 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3562,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:44:14.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:44:14.666: INFO: Create a RollingUpdate DaemonSet Jun 5 00:44:14.669: INFO: Check that daemon pods launch on every node of the cluster Jun 5 00:44:14.706: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 00:44:14.739: INFO: Number of nodes with available pods: 0 Jun 5 00:44:14.739: 
INFO: Node latest-worker is running more than one daemon pod
Jun 5 00:44:15.745: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:15.750: INFO: Number of nodes with available pods: 0
Jun 5 00:44:15.750: INFO: Node latest-worker is running more than one daemon pod
Jun 5 00:44:16.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:16.893: INFO: Number of nodes with available pods: 0
Jun 5 00:44:16.893: INFO: Node latest-worker is running more than one daemon pod
Jun 5 00:44:17.745: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:17.749: INFO: Number of nodes with available pods: 0
Jun 5 00:44:17.750: INFO: Node latest-worker is running more than one daemon pod
Jun 5 00:44:18.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:18.747: INFO: Number of nodes with available pods: 1
Jun 5 00:44:18.747: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:44:19.752: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:19.768: INFO: Number of nodes with available pods: 2
Jun 5 00:44:19.768: INFO: Number of running nodes: 2, number of available pods: 2
Jun 5 00:44:19.768: INFO: Update the DaemonSet to trigger a rollout
Jun 5 00:44:19.804: INFO: Updating DaemonSet daemon-set
Jun 5 00:44:35.846: INFO: Roll back the DaemonSet before rollout is complete
Jun 5 00:44:35.853: INFO: Updating DaemonSet daemon-set
Jun 5 00:44:35.853: INFO: Make sure DaemonSet rollback is complete
Jun 5 00:44:35.863: INFO: Wrong image for pod: daemon-set-vgfh2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 5 00:44:35.863: INFO: Pod daemon-set-vgfh2 is not available
Jun 5 00:44:35.869: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:36.878: INFO: Wrong image for pod: daemon-set-vgfh2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 5 00:44:36.878: INFO: Pod daemon-set-vgfh2 is not available
Jun 5 00:44:36.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 5 00:44:37.872: INFO: Pod daemon-set-vs7kn is not available
Jun 5 00:44:37.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4268, will wait for the garbage collector to delete the pods
Jun 5 00:44:37.966: INFO: Deleting DaemonSet.extensions daemon-set took: 32.03557ms
Jun 5 00:44:38.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.529322ms
Jun 5 00:44:45.272: INFO: Number of nodes with available pods: 0
Jun 5 00:44:45.272: INFO: Number of running nodes: 0, number of available pods: 0
Jun 5 00:44:45.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4268/daemonsets","resourceVersion":"10344330"},"items":null}
Jun 5 00:44:45.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4268/pods","resourceVersion":"10344330"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:44:45.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4268" for this suite.
• [SLOW TEST:30.801 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":217,"skipped":3568,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:44:45.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
Jun 5 00:44:50.560: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5631 pod-service-account-ff441306-8de7-49bd-87ed-c371f2e15033 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jun 5 00:44:50.831: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5631 pod-service-account-ff441306-8de7-49bd-87ed-c371f2e15033 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jun 5 00:44:51.034: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5631 pod-service-account-ff441306-8de7-49bd-87ed-c371f2e15033 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:44:51.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5631" for this suite.
• [SLOW TEST:5.976 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":218,"skipped":3607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:44:51.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8470.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8470.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8470.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8470.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.101.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.101.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.101.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.101.42_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8470.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8470.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8470.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8470.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8470.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8470.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.101.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.101.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.101.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.101.42_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 5 00:44:57.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.504: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.506: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.509: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.527: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.530: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.532: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.534: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:44:57.550: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:02.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.564: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.567: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.592: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.598: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:02.622: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:07.555: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.558: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.561: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.564: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.584: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.587: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.590: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:07.611: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:12.555: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.558: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.561: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.564: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.584: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.587: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.591: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.593: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:12.613: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:17.557: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.563: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.566: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.589: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:17.612: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:22.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.563: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.567: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.590: INFO: Unable to read jessie_udp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.596: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.599: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local from pod dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2: the server could not find the requested resource (get pods dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2)
Jun 5 00:45:22.622: INFO: Lookups using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 failed for: [wheezy_udp@dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@dns-test-service.dns-8470.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_udp@dns-test-service.dns-8470.svc.cluster.local jessie_tcp@dns-test-service.dns-8470.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8470.svc.cluster.local]
Jun 5 00:45:27.605: INFO: DNS probes using dns-8470/dns-test-e18a849d-39e4-43ac-957d-ca63f33a9ed2 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:45:28.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8470" for this suite.
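The probe commands logged above embed `$$` because the Kubernetes pod spec passes the string through one round of variable expansion before the shell sees it. A minimal sketch of a single de-escaped probe iteration is below; the service name is taken from this run (namespace dns-8470), and since that name only resolves inside the test cluster, the `dig` lookup is stubbed out with a hypothetical `lookup` function returning the service IP seen in this log:

```shell
#!/bin/sh
# One iteration of the probe loop, de-escaped ('$$' -> '$').
# lookup is a hypothetical stand-in for:
#   dig +notcp +noall +answer +search "$1" A
# which only works from a pod inside the test cluster.
lookup() {
  echo "10.105.101.42"   # service ClusterIP observed in this run
}
check="$(lookup dns-test-service.dns-8470.svc.cluster.local)" \
  && test -n "$check" \
  && echo OK
```

The real probe writes `OK` to a per-name file under `/results`, which the test then reads back via the pod to decide success.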
• [SLOW TEST:37.105 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":219,"skipped":3648,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:45:28.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jun 5 00:45:28.451: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 5 00:45:28.493: INFO: Number of nodes with available pods: 0
Jun 5 00:45:28.494: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 5 00:45:28.563: INFO: Number of nodes with available pods: 0
Jun 5 00:45:28.563: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:29.566: INFO: Number of nodes with available pods: 0
Jun 5 00:45:29.566: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:30.682: INFO: Number of nodes with available pods: 0
Jun 5 00:45:30.682: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:31.590: INFO: Number of nodes with available pods: 0
Jun 5 00:45:31.590: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:32.579: INFO: Number of nodes with available pods: 1
Jun 5 00:45:32.579: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 5 00:45:32.611: INFO: Number of nodes with available pods: 1
Jun 5 00:45:32.611: INFO: Number of running nodes: 0, number of available pods: 1
Jun 5 00:45:33.633: INFO: Number of nodes with available pods: 0
Jun 5 00:45:33.633: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 5 00:45:33.746: INFO: Number of nodes with available pods: 0
Jun 5 00:45:33.746: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:34.842: INFO: Number of nodes with available pods: 0
Jun 5 00:45:34.842: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:35.751: INFO: Number of nodes with available pods: 0
Jun 5 00:45:35.751: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:36.751: INFO: Number of nodes with available pods: 0
Jun 5 00:45:36.751: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:37.750: INFO: Number of nodes with available pods: 0
Jun 5 00:45:37.750: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:38.751: INFO: Number of nodes with available pods: 0
Jun 5 00:45:38.751: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:39.752: INFO: Number of nodes with available pods: 0
Jun 5 00:45:39.752: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:40.754: INFO: Number of nodes with available pods: 0
Jun 5 00:45:40.754: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:41.750: INFO: Number of nodes with available pods: 0
Jun 5 00:45:41.750: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:42.751: INFO: Number of nodes with available pods: 0
Jun 5 00:45:42.751: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:43.770: INFO: Number of nodes with available pods: 0
Jun 5 00:45:43.770: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:44.751: INFO: Number of nodes with available pods: 0
Jun 5 00:45:44.751: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:45.763: INFO: Number of nodes with available pods: 0
Jun 5 00:45:45.763: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:46.750: INFO: Number of nodes with available pods: 0
Jun 5 00:45:46.750: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:47.794: INFO: Number of nodes with available pods: 0
Jun 5 00:45:47.794: INFO: Node latest-worker2 is running more than one daemon pod
Jun 5 00:45:48.764: INFO: Number of nodes with available pods: 1
Jun 5 00:45:48.764: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9819, will wait for the garbage collector to delete the pods
Jun 5 00:45:48.828: INFO: Deleting DaemonSet.extensions daemon-set took: 6.523596ms
Jun 5 00:45:49.128: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288323ms
Jun 5 00:45:55.332: INFO: Number of nodes with available pods: 0
Jun 5 00:45:55.332: INFO: Number of running nodes: 0, number of available pods: 0
Jun 5 00:45:55.335: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9819/daemonsets","resourceVersion":"10344714"},"items":null}
Jun 5 00:45:55.338: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9819/pods","resourceVersion":"10344714"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:45:55.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9819" for this suite.
• [SLOW TEST:27.004 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":220,"skipped":3655,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:45:55.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 5 00:45:55.470: INFO: Waiting up to 5m0s for pod "pod-076735d2-45b2-4b43-90dc-51f7bfe05dda" in namespace "emptydir-8740" to be "Succeeded or Failed"
Jun 5 00:45:55.487: INFO: Pod "pod-076735d2-45b2-4b43-90dc-51f7bfe05dda": Phase="Pending", Reason="", readiness=false. Elapsed: 16.983342ms
Jun 5 00:45:57.490: INFO: Pod "pod-076735d2-45b2-4b43-90dc-51f7bfe05dda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020113206s
Jun 5 00:45:59.495: INFO: Pod "pod-076735d2-45b2-4b43-90dc-51f7bfe05dda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024781513s
STEP: Saw pod success
Jun 5 00:45:59.495: INFO: Pod "pod-076735d2-45b2-4b43-90dc-51f7bfe05dda" satisfied condition "Succeeded or Failed"
Jun 5 00:45:59.498: INFO: Trying to get logs from node latest-worker pod pod-076735d2-45b2-4b43-90dc-51f7bfe05dda container test-container:
STEP: delete the pod
Jun 5 00:45:59.554: INFO: Waiting for pod pod-076735d2-45b2-4b43-90dc-51f7bfe05dda to disappear
Jun 5 00:45:59.568: INFO: Pod pod-076735d2-45b2-4b43-90dc-51f7bfe05dda no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:45:59.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8740" for this suite.
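The test above creates a pod that writes a file with mode 0666 into a tmpfs-backed emptyDir volume and expects it to succeed. A minimal, hypothetical pod manifest reproducing the same check outside the e2e framework (the name, image, and shell commands below are illustrative stand-ins for the framework's mounttest image, not the test's actual spec) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-check        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # stand-in for the e2e mounttest image
    command: ["sh", "-c"]
    args:
    - touch /test-volume/f &&
      chmod 0666 /test-volume/f &&
      stat -c '%a' /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir, as in the test
```

Since the pod's restartPolicy is Never, the "Succeeded or Failed" wait in the log corresponds to the pod reaching phase Succeeded once the shell commands exit 0.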
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3672,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:45:59.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 5 00:46:00.203: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 5 00:46:02.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914760, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914760, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914760, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914760, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 5 00:46:05.326: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:46:05.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5055" for this suite.
STEP: Destroying namespace "webhook-5055-markers" for this suite.
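The steps above deploy a webhook backend (service e2e-test-webhook) and then register validating webhooks that reject non-compliant ConfigMaps until the collection is deleted. A hypothetical ValidatingWebhookConfiguration of the kind this test registers could be sketched as follows (the webhook name, rule scope, and path are illustrative; only the service name and namespace come from the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-validating-webhook   # illustrative name
webhooks:
- name: deny-configmap.example.com    # illustrative name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-5055         # namespace from the run above
      name: e2e-test-webhook          # service from the run above
      path: /configmaps               # illustrative path
    # caBundle: <base64 CA cert that signed the webhook server cert>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With failurePolicy Fail, a ConfigMap that the backend rejects cannot be created; once the configuration is deleted, the same ConfigMap creation succeeds, which is exactly what the two "Creating a configMap that does not comply" steps verify before and after the deletion.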
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.468 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":222,"skipped":3674,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:46:06.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 5 00:46:06.733: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 5 00:46:08.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914766, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914766, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914767, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914766, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 5 00:46:12.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:46:12.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4598" for this suite.
STEP: Destroying namespace "webhook-4598-markers" for this suite.
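The "Patching a mutating webhook configuration's rules to include the create operation" step above can be approximated with a JSON patch against the registered configuration. A hypothetical patch file (the configuration name and the assumption that the first webhook's first rule is the one being toggled are both illustrative):

```yaml
# restore-create.yaml — hypothetical JSON patch re-adding the CREATE operation
- op: replace
  path: /webhooks/0/rules/0/operations
  value: ["CREATE"]
```

Applied with something along the lines of `kubectl patch mutatingwebhookconfiguration <name> --type=json -p "$(cat restore-create.yaml)"`, after which a newly created ConfigMap is mutated again, matching the final "Creating a configMap that should be mutated" step.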
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.370 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":223,"skipped":3689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:46:12.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-3e5b8a87-743f-4064-9a5d-43d1f90f51db
STEP: Creating a pod to test consume secrets
Jun 5 00:46:12.876: INFO: Waiting up to 5m0s for pod "pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1" in namespace "secrets-7796" to be "Succeeded or Failed"
Jun 5 00:46:12.920: INFO: Pod "pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.605872ms
Jun 5 00:46:14.950: INFO: Pod "pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074175082s
Jun 5 00:46:16.956: INFO: Pod "pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080042712s
STEP: Saw pod success
Jun 5 00:46:16.956: INFO: Pod "pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1" satisfied condition "Succeeded or Failed"
Jun 5 00:46:16.959: INFO: Trying to get logs from node latest-worker pod pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1 container secret-volume-test:
STEP: delete the pod
Jun 5 00:46:17.022: INFO: Waiting for pod pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1 to disappear
Jun 5 00:46:17.051: INFO: Pod pod-secrets-6c813573-ecd2-4756-bd47-e8e20569ebc1 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:46:17.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7796" for this suite.
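"With mappings and Item Mode set" in the test above means the Secret is mounted with `items` entries that remap a key to a different file path and set a per-item file mode. A hypothetical minimal manifest pair showing those two fields (the secret key, value, paths, and mode below are illustrative, not the test's actual data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map            # illustrative name
data:
  data-1: dmFsdWUtMQ==             # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-check          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # stand-in for the e2e mounttest image
    command: ["sh", "-c",
      "stat -c '%a' /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1      # "mapping": key exposed under a new filename
        mode: 0400                 # "Item Mode": per-file permission bits
```

Without `items`, every key would appear under its own name with the volume's defaultMode; the mapping and per-item mode are exactly what this conformance test asserts.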
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3711,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:46:17.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jun 5 00:46:21.685: INFO: Successfully updated pod "labelsupdate05b7e748-4d04-4d47-aa3f-326c4066af4e"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 00:46:25.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3627" for this suite.
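The projected downwardAPI test above creates a pod whose labels are exposed through a projected volume, then updates the labels and waits for the kubelet to refresh the mounted file. A hypothetical pod spec of that shape (names, image, and polling loop are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-check         # illustrative name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

Changing the pod's labels after creation (e.g. `kubectl label pod labelsupdate-check key=value2 --overwrite`) is eventually reflected in /etc/podinfo/labels, which is the modification the "Successfully updated pod" log line records.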
• [SLOW TEST:8.706 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3716,"failed":0}
S
------------------------------
[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 00:46:25.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1185
STEP: creating service affinity-nodeport-transition in namespace services-1185
STEP: creating replication controller affinity-nodeport-transition in namespace services-1185
I0605 00:46:25.904482 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1185, replica count: 3
I0605 00:46:28.954873 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0
terminating, 0 unknown, 0 runningButNotReady I0605 00:46:31.955122 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:46:34.955375 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:46:34.967: INFO: Creating new exec pod Jun 5 00:46:40.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jun 5 00:46:40.264: INFO: stderr: "I0605 00:46:40.154658 2635 log.go:172] (0xc00003a420) (0xc0006a3b80) Create stream\nI0605 00:46:40.154742 2635 log.go:172] (0xc00003a420) (0xc0006a3b80) Stream added, broadcasting: 1\nI0605 00:46:40.157421 2635 log.go:172] (0xc00003a420) Reply frame received for 1\nI0605 00:46:40.157470 2635 log.go:172] (0xc00003a420) (0xc0004ead20) Create stream\nI0605 00:46:40.157482 2635 log.go:172] (0xc00003a420) (0xc0004ead20) Stream added, broadcasting: 3\nI0605 00:46:40.159236 2635 log.go:172] (0xc00003a420) Reply frame received for 3\nI0605 00:46:40.159271 2635 log.go:172] (0xc00003a420) (0xc0004e4460) Create stream\nI0605 00:46:40.159282 2635 log.go:172] (0xc00003a420) (0xc0004e4460) Stream added, broadcasting: 5\nI0605 00:46:40.160920 2635 log.go:172] (0xc00003a420) Reply frame received for 5\nI0605 00:46:40.242654 2635 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:46:40.242689 2635 log.go:172] (0xc0004e4460) (5) Data frame handling\nI0605 00:46:40.242709 2635 log.go:172] (0xc0004e4460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0605 00:46:40.255468 2635 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:46:40.255487 2635 log.go:172] (0xc0004e4460) (5) Data frame handling\nI0605 
00:46:40.255493 2635 log.go:172] (0xc0004e4460) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0605 00:46:40.255706 2635 log.go:172] (0xc00003a420) Data frame received for 5\nI0605 00:46:40.255723 2635 log.go:172] (0xc0004e4460) (5) Data frame handling\nI0605 00:46:40.255766 2635 log.go:172] (0xc00003a420) Data frame received for 3\nI0605 00:46:40.255800 2635 log.go:172] (0xc0004ead20) (3) Data frame handling\nI0605 00:46:40.257760 2635 log.go:172] (0xc00003a420) Data frame received for 1\nI0605 00:46:40.257786 2635 log.go:172] (0xc0006a3b80) (1) Data frame handling\nI0605 00:46:40.257808 2635 log.go:172] (0xc0006a3b80) (1) Data frame sent\nI0605 00:46:40.257824 2635 log.go:172] (0xc00003a420) (0xc0006a3b80) Stream removed, broadcasting: 1\nI0605 00:46:40.257846 2635 log.go:172] (0xc00003a420) Go away received\nI0605 00:46:40.258169 2635 log.go:172] (0xc00003a420) (0xc0006a3b80) Stream removed, broadcasting: 1\nI0605 00:46:40.258192 2635 log.go:172] (0xc00003a420) (0xc0004ead20) Stream removed, broadcasting: 3\nI0605 00:46:40.258204 2635 log.go:172] (0xc00003a420) (0xc0004e4460) Stream removed, broadcasting: 5\n" Jun 5 00:46:40.264: INFO: stdout: "" Jun 5 00:46:40.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c nc -zv -t -w 2 10.110.221.157 80' Jun 5 00:46:40.485: INFO: stderr: "I0605 00:46:40.403132 2656 log.go:172] (0xc00096a790) (0xc0006dea00) Create stream\nI0605 00:46:40.403188 2656 log.go:172] (0xc00096a790) (0xc0006dea00) Stream added, broadcasting: 1\nI0605 00:46:40.406405 2656 log.go:172] (0xc00096a790) Reply frame received for 1\nI0605 00:46:40.406463 2656 log.go:172] (0xc00096a790) (0xc00059c500) Create stream\nI0605 00:46:40.406482 2656 log.go:172] (0xc00096a790) (0xc00059c500) Stream added, broadcasting: 3\nI0605 00:46:40.407823 2656 log.go:172] (0xc00096a790) Reply frame 
received for 3\nI0605 00:46:40.407879 2656 log.go:172] (0xc00096a790) (0xc0006f4e60) Create stream\nI0605 00:46:40.407917 2656 log.go:172] (0xc00096a790) (0xc0006f4e60) Stream added, broadcasting: 5\nI0605 00:46:40.409429 2656 log.go:172] (0xc00096a790) Reply frame received for 5\nI0605 00:46:40.474921 2656 log.go:172] (0xc00096a790) Data frame received for 5\nI0605 00:46:40.474988 2656 log.go:172] (0xc0006f4e60) (5) Data frame handling\nI0605 00:46:40.475020 2656 log.go:172] (0xc0006f4e60) (5) Data frame sent\nI0605 00:46:40.475039 2656 log.go:172] (0xc00096a790) Data frame received for 5\n+ nc -zv -t -w 2 10.110.221.157 80\nConnection to 10.110.221.157 80 port [tcp/http] succeeded!\nI0605 00:46:40.475095 2656 log.go:172] (0xc00096a790) Data frame received for 3\nI0605 00:46:40.475276 2656 log.go:172] (0xc00059c500) (3) Data frame handling\nI0605 00:46:40.475308 2656 log.go:172] (0xc0006f4e60) (5) Data frame handling\nI0605 00:46:40.476632 2656 log.go:172] (0xc00096a790) Data frame received for 1\nI0605 00:46:40.476648 2656 log.go:172] (0xc0006dea00) (1) Data frame handling\nI0605 00:46:40.476659 2656 log.go:172] (0xc0006dea00) (1) Data frame sent\nI0605 00:46:40.476667 2656 log.go:172] (0xc00096a790) (0xc0006dea00) Stream removed, broadcasting: 1\nI0605 00:46:40.476674 2656 log.go:172] (0xc00096a790) Go away received\nI0605 00:46:40.477349 2656 log.go:172] (0xc00096a790) (0xc0006dea00) Stream removed, broadcasting: 1\nI0605 00:46:40.477376 2656 log.go:172] (0xc00096a790) (0xc00059c500) Stream removed, broadcasting: 3\nI0605 00:46:40.477390 2656 log.go:172] (0xc00096a790) (0xc0006f4e60) Stream removed, broadcasting: 5\n" Jun 5 00:46:40.485: INFO: stdout: "" Jun 5 00:46:40.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30248' Jun 5 00:46:40.709: INFO: stderr: "I0605 00:46:40.621800 2679 log.go:172] 
(0xc000ac7550) (0xc000b36140) Create stream\nI0605 00:46:40.621857 2679 log.go:172] (0xc000ac7550) (0xc000b36140) Stream added, broadcasting: 1\nI0605 00:46:40.627144 2679 log.go:172] (0xc000ac7550) Reply frame received for 1\nI0605 00:46:40.627192 2679 log.go:172] (0xc000ac7550) (0xc000740000) Create stream\nI0605 00:46:40.627206 2679 log.go:172] (0xc000ac7550) (0xc000740000) Stream added, broadcasting: 3\nI0605 00:46:40.628031 2679 log.go:172] (0xc000ac7550) Reply frame received for 3\nI0605 00:46:40.628086 2679 log.go:172] (0xc000ac7550) (0xc0006f4dc0) Create stream\nI0605 00:46:40.628102 2679 log.go:172] (0xc000ac7550) (0xc0006f4dc0) Stream added, broadcasting: 5\nI0605 00:46:40.628869 2679 log.go:172] (0xc000ac7550) Reply frame received for 5\nI0605 00:46:40.702032 2679 log.go:172] (0xc000ac7550) Data frame received for 5\nI0605 00:46:40.702058 2679 log.go:172] (0xc0006f4dc0) (5) Data frame handling\nI0605 00:46:40.702066 2679 log.go:172] (0xc0006f4dc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30248\nI0605 00:46:40.702345 2679 log.go:172] (0xc000ac7550) Data frame received for 5\nI0605 00:46:40.702369 2679 log.go:172] (0xc0006f4dc0) (5) Data frame handling\nI0605 00:46:40.702381 2679 log.go:172] (0xc0006f4dc0) (5) Data frame sent\nConnection to 172.17.0.13 30248 port [tcp/30248] succeeded!\nI0605 00:46:40.702804 2679 log.go:172] (0xc000ac7550) Data frame received for 3\nI0605 00:46:40.702819 2679 log.go:172] (0xc000740000) (3) Data frame handling\nI0605 00:46:40.702876 2679 log.go:172] (0xc000ac7550) Data frame received for 5\nI0605 00:46:40.702912 2679 log.go:172] (0xc0006f4dc0) (5) Data frame handling\nI0605 00:46:40.704604 2679 log.go:172] (0xc000ac7550) Data frame received for 1\nI0605 00:46:40.704649 2679 log.go:172] (0xc000b36140) (1) Data frame handling\nI0605 00:46:40.704688 2679 log.go:172] (0xc000b36140) (1) Data frame sent\nI0605 00:46:40.704701 2679 log.go:172] (0xc000ac7550) (0xc000b36140) Stream removed, broadcasting: 1\nI0605 
00:46:40.704729 2679 log.go:172] (0xc000ac7550) Go away received\nI0605 00:46:40.705414 2679 log.go:172] (0xc000ac7550) (0xc000b36140) Stream removed, broadcasting: 1\nI0605 00:46:40.705450 2679 log.go:172] (0xc000ac7550) (0xc000740000) Stream removed, broadcasting: 3\nI0605 00:46:40.705464 2679 log.go:172] (0xc000ac7550) (0xc0006f4dc0) Stream removed, broadcasting: 5\n" Jun 5 00:46:40.710: INFO: stdout: "" Jun 5 00:46:40.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30248' Jun 5 00:46:40.919: INFO: stderr: "I0605 00:46:40.829684 2700 log.go:172] (0xc000ad2e70) (0xc0006a8f00) Create stream\nI0605 00:46:40.829754 2700 log.go:172] (0xc000ad2e70) (0xc0006a8f00) Stream added, broadcasting: 1\nI0605 00:46:40.836668 2700 log.go:172] (0xc000ad2e70) Reply frame received for 1\nI0605 00:46:40.836752 2700 log.go:172] (0xc000ad2e70) (0xc00054e1e0) Create stream\nI0605 00:46:40.836776 2700 log.go:172] (0xc000ad2e70) (0xc00054e1e0) Stream added, broadcasting: 3\nI0605 00:46:40.838018 2700 log.go:172] (0xc000ad2e70) Reply frame received for 3\nI0605 00:46:40.838048 2700 log.go:172] (0xc000ad2e70) (0xc0004b6d20) Create stream\nI0605 00:46:40.838057 2700 log.go:172] (0xc000ad2e70) (0xc0004b6d20) Stream added, broadcasting: 5\nI0605 00:46:40.838957 2700 log.go:172] (0xc000ad2e70) Reply frame received for 5\nI0605 00:46:40.913538 2700 log.go:172] (0xc000ad2e70) Data frame received for 5\nI0605 00:46:40.913563 2700 log.go:172] (0xc0004b6d20) (5) Data frame handling\nI0605 00:46:40.913575 2700 log.go:172] (0xc0004b6d20) (5) Data frame sent\nI0605 00:46:40.913581 2700 log.go:172] (0xc000ad2e70) Data frame received for 5\nI0605 00:46:40.913585 2700 log.go:172] (0xc0004b6d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30248\nConnection to 172.17.0.12 30248 port [tcp/30248] succeeded!\nI0605 00:46:40.913627 2700 
log.go:172] (0xc0004b6d20) (5) Data frame sent\nI0605 00:46:40.913884 2700 log.go:172] (0xc000ad2e70) Data frame received for 5\nI0605 00:46:40.913898 2700 log.go:172] (0xc0004b6d20) (5) Data frame handling\nI0605 00:46:40.914208 2700 log.go:172] (0xc000ad2e70) Data frame received for 3\nI0605 00:46:40.914221 2700 log.go:172] (0xc00054e1e0) (3) Data frame handling\nI0605 00:46:40.915939 2700 log.go:172] (0xc000ad2e70) Data frame received for 1\nI0605 00:46:40.915953 2700 log.go:172] (0xc0006a8f00) (1) Data frame handling\nI0605 00:46:40.915966 2700 log.go:172] (0xc0006a8f00) (1) Data frame sent\nI0605 00:46:40.915975 2700 log.go:172] (0xc000ad2e70) (0xc0006a8f00) Stream removed, broadcasting: 1\nI0605 00:46:40.916036 2700 log.go:172] (0xc000ad2e70) Go away received\nI0605 00:46:40.916266 2700 log.go:172] (0xc000ad2e70) (0xc0006a8f00) Stream removed, broadcasting: 1\nI0605 00:46:40.916281 2700 log.go:172] (0xc000ad2e70) (0xc00054e1e0) Stream removed, broadcasting: 3\nI0605 00:46:40.916288 2700 log.go:172] (0xc000ad2e70) (0xc0004b6d20) Stream removed, broadcasting: 5\n" Jun 5 00:46:40.919: INFO: stdout: "" Jun 5 00:46:40.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30248/ ; done' Jun 5 00:46:41.264: INFO: stderr: "I0605 00:46:41.072547 2720 log.go:172] (0xc00003a840) (0xc0000f3900) Create stream\nI0605 00:46:41.072600 2720 log.go:172] (0xc00003a840) (0xc0000f3900) Stream added, broadcasting: 1\nI0605 00:46:41.084246 2720 log.go:172] (0xc00003a840) Reply frame received for 1\nI0605 00:46:41.084301 2720 log.go:172] (0xc00003a840) (0xc00015f900) Create stream\nI0605 00:46:41.084316 2720 log.go:172] (0xc00003a840) (0xc00015f900) Stream added, broadcasting: 3\nI0605 00:46:41.085972 2720 log.go:172] (0xc00003a840) Reply frame received for 3\nI0605 
00:46:41.086025 2720 log.go:172] (0xc00003a840) (0xc0005121e0) Create stream\nI0605 00:46:41.086043 2720 log.go:172] (0xc00003a840) (0xc0005121e0) Stream added, broadcasting: 5\nI0605 00:46:41.088435 2720 log.go:172] (0xc00003a840) Reply frame received for 5\nI0605 00:46:41.156780 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.156817 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.156831 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.156853 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.156861 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.156869 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.178074 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.178099 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.178112 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.178701 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.178737 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.178767 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.178796 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.178814 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.178837 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.182541 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.182559 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.182572 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.183078 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.183096 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.183110 
2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.183125 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.183136 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.183148 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.190157 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.190199 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.190232 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.190630 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.190661 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.190678 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\nI0605 00:46:41.190692 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.190706 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.190724 2720 log.go:172] (0xc00015f900) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.194663 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.194686 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.194701 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.195189 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.195215 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.195233 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.195264 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.195288 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.195320 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.199200 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 
00:46:41.199226 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.199264 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.199679 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.199694 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.199700 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.199709 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.199714 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.199718 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\nI0605 00:46:41.199723 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.199728 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.199736 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\nI0605 00:46:41.204180 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.204219 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.204257 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.204614 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.204652 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.204666 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.204689 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.204708 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.204729 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.208960 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.209002 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.209039 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.209396 2720 log.go:172] (0xc00003a840) Data frame 
received for 5\nI0605 00:46:41.209433 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.209460 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\nI0605 00:46:41.209475 2720 log.go:172] (0xc00003a840) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0605 00:46:41.209499 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.209563 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n http://172.17.0.13:30248/\nI0605 00:46:41.209585 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.209609 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.209620 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.213464 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.213485 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.213495 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.214185 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.214233 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.214270 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.214299 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.214334 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.214357 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.218258 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.218274 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.218282 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.218784 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.218809 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.218823 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30248/\nI0605 00:46:41.218840 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.218855 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.218867 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.223083 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.223098 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.223109 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.223422 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.223433 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.223440 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\nI0605 00:46:41.223548 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.223566 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.223574 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.223607 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.223633 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.223657 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.228239 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.228260 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.228277 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.228677 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.228688 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.228696 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.228712 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.228729 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.228743 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curlI0605 
00:46:41.228758 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.228789 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.228806 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.233510 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.233544 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.233565 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.234006 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.234037 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.234050 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.234073 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.234096 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.234123 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.238176 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.238198 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.238209 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.238718 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.238744 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.238763 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.238944 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.238963 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.238981 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.242907 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.242934 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 
00:46:41.242951 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.243413 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.243448 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.243474 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.243486 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.243495 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.243510 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.248177 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.248222 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.248259 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.248670 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.248701 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.248713 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.248733 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.248753 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.248781 2720 log.go:172] (0xc0005121e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.254218 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.254335 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.254451 2720 log.go:172] (0xc00015f900) (3) Data frame sent\nI0605 00:46:41.254556 2720 log.go:172] (0xc00003a840) Data frame received for 5\nI0605 00:46:41.254594 2720 log.go:172] (0xc00003a840) Data frame received for 3\nI0605 00:46:41.254634 2720 log.go:172] (0xc00015f900) (3) Data frame handling\nI0605 00:46:41.254682 2720 log.go:172] (0xc0005121e0) (5) Data frame handling\nI0605 00:46:41.256136 2720 log.go:172] (0xc00003a840) Data frame 
received for 1\nI0605 00:46:41.256233 2720 log.go:172] (0xc0000f3900) (1) Data frame handling\nI0605 00:46:41.256315 2720 log.go:172] (0xc0000f3900) (1) Data frame sent\nI0605 00:46:41.256356 2720 log.go:172] (0xc00003a840) (0xc0000f3900) Stream removed, broadcasting: 1\nI0605 00:46:41.256425 2720 log.go:172] (0xc00003a840) Go away received\nI0605 00:46:41.256821 2720 log.go:172] (0xc00003a840) (0xc0000f3900) Stream removed, broadcasting: 1\nI0605 00:46:41.256850 2720 log.go:172] (0xc00003a840) (0xc00015f900) Stream removed, broadcasting: 3\nI0605 00:46:41.256867 2720 log.go:172] (0xc00003a840) (0xc0005121e0) Stream removed, broadcasting: 5\n" Jun 5 00:46:41.265: INFO: stdout: "\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-ndv8d\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-ndv8d\naffinity-nodeport-transition-nc565\naffinity-nodeport-transition-ndv8d\naffinity-nodeport-transition-ndv8d\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-nc565\naffinity-nodeport-transition-nc565\naffinity-nodeport-transition-nc565\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-ndv8d\naffinity-nodeport-transition-nc565" Jun 5 00:46:41.265: INFO: Received response from host: Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-ndv8d Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-ndv8d Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-nc565 Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-ndv8d Jun 5 00:46:41.265: INFO: Received response from 
host: affinity-nodeport-transition-ndv8d Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-nc565 Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-nc565 Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-nc565 Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-ndv8d Jun 5 00:46:41.265: INFO: Received response from host: affinity-nodeport-transition-nc565 Jun 5 00:46:41.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1185 execpod-affinityf4949 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30248/ ; done' Jun 5 00:46:41.596: INFO: stderr: "I0605 00:46:41.415000 2740 log.go:172] (0xc000a8d760) (0xc0006c9c20) Create stream\nI0605 00:46:41.415058 2740 log.go:172] (0xc000a8d760) (0xc0006c9c20) Stream added, broadcasting: 1\nI0605 00:46:41.420134 2740 log.go:172] (0xc000a8d760) Reply frame received for 1\nI0605 00:46:41.420214 2740 log.go:172] (0xc000a8d760) (0xc0006985a0) Create stream\nI0605 00:46:41.420241 2740 log.go:172] (0xc000a8d760) (0xc0006985a0) Stream added, broadcasting: 3\nI0605 00:46:41.421637 2740 log.go:172] (0xc000a8d760) Reply frame received for 3\nI0605 00:46:41.421679 2740 log.go:172] (0xc000a8d760) (0xc000698aa0) Create stream\nI0605 00:46:41.421689 2740 log.go:172] (0xc000a8d760) (0xc000698aa0) Stream added, broadcasting: 5\nI0605 00:46:41.422712 2740 log.go:172] (0xc000a8d760) Reply frame received for 5\nI0605 00:46:41.494668 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.494723 2740 
log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.494741 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.494770 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.494781 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.494807 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.498571 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.498585 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.498591 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.499409 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.499446 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.499471 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.499502 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.499521 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.499536 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.505486 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.505512 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.505528 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.506026 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.506042 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.506048 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.506145 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.506163 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.506186 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 
00:46:41.510093 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.510107 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.510115 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.510788 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.510802 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.510810 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.510905 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.510923 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.510946 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.515562 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.515581 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.515600 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.516545 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.516580 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.516591 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.516605 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.516619 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.516627 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.524471 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.524500 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.524521 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.524965 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.524985 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.525002 2740 log.go:172] (0xc000698aa0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.525278 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.525315 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.525347 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.531336 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.531368 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.531396 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.531689 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.531728 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.531767 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.531793 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.531815 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.531841 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.538639 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.538683 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.538711 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.539565 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.539599 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.539621 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.539652 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.539674 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.539704 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.544289 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.544313 2740 log.go:172] (0xc0006985a0) (3) 
Data frame handling\nI0605 00:46:41.544333 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.544879 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.544937 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.544963 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.544991 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.545018 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.545047 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.549378 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.549416 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.549436 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.549741 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.549787 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.549833 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.549855 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.549875 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.549904 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.553615 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.553642 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.553663 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.554654 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.554683 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.554711 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.554725 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30248/\nI0605 00:46:41.554738 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.554792 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.559439 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.559485 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.559514 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.560103 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.560132 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.560163 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.560177 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.560192 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.560206 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.563756 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.563794 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.563828 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.564200 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.564221 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.564244 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.564276 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.564296 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.564319 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.570454 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.570491 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.570683 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.571169 2740 
log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.571292 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.571324 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.571349 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.571360 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.571385 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.576084 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.576115 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.576142 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.576708 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.576732 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.576746 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.576771 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.576802 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.576829 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.582084 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.582105 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.582116 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.582725 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.582745 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.582757 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.582785 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.582818 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.582852 2740 log.go:172] (0xc000698aa0) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.17.0.13:30248/\nI0605 00:46:41.586987 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.587314 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.587369 2740 log.go:172] (0xc0006985a0) (3) Data frame sent\nI0605 00:46:41.587447 2740 log.go:172] (0xc000a8d760) Data frame received for 3\nI0605 00:46:41.587469 2740 log.go:172] (0xc0006985a0) (3) Data frame handling\nI0605 00:46:41.587581 2740 log.go:172] (0xc000a8d760) Data frame received for 5\nI0605 00:46:41.587602 2740 log.go:172] (0xc000698aa0) (5) Data frame handling\nI0605 00:46:41.589925 2740 log.go:172] (0xc000a8d760) Data frame received for 1\nI0605 00:46:41.589957 2740 log.go:172] (0xc0006c9c20) (1) Data frame handling\nI0605 00:46:41.589983 2740 log.go:172] (0xc0006c9c20) (1) Data frame sent\nI0605 00:46:41.590018 2740 log.go:172] (0xc000a8d760) (0xc0006c9c20) Stream removed, broadcasting: 1\nI0605 00:46:41.590050 2740 log.go:172] (0xc000a8d760) Go away received\nI0605 00:46:41.590342 2740 log.go:172] (0xc000a8d760) (0xc0006c9c20) Stream removed, broadcasting: 1\nI0605 00:46:41.590361 2740 log.go:172] (0xc000a8d760) (0xc0006985a0) Stream removed, broadcasting: 3\nI0605 00:46:41.590371 2740 log.go:172] (0xc000a8d760) (0xc000698aa0) Stream removed, broadcasting: 5\n" Jun 5 00:46:41.597: INFO: stdout: "\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx\naffinity-nodeport-transition-n85vx" Jun 5 00:46:41.597: INFO: Received response from host: 
Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Received response from host: affinity-nodeport-transition-n85vx Jun 5 00:46:41.597: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1185, will wait for the garbage collector to delete the pods Jun 5 00:46:41.718: INFO: Deleting ReplicationController affinity-nodeport-transition took: 28.444123ms Jun 5 00:46:42.119: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.290987ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 
00:46:55.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1185" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.597 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":226,"skipped":3717,"failed":0} [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:46:55.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-2347 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2347 to expose endpoints map[] Jun 5 00:46:55.548: INFO: Get endpoints failed (40.095417ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 5 00:46:56.553: INFO: successfully validated 
that service multi-endpoint-test in namespace services-2347 exposes endpoints map[] (1.044723589s elapsed) STEP: Creating pod pod1 in namespace services-2347 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2347 to expose endpoints map[pod1:[100]] Jun 5 00:47:00.707: INFO: successfully validated that service multi-endpoint-test in namespace services-2347 exposes endpoints map[pod1:[100]] (4.144818096s elapsed) STEP: Creating pod pod2 in namespace services-2347 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2347 to expose endpoints map[pod1:[100] pod2:[101]] Jun 5 00:47:04.015: INFO: successfully validated that service multi-endpoint-test in namespace services-2347 exposes endpoints map[pod1:[100] pod2:[101]] (3.303291559s elapsed) STEP: Deleting pod pod1 in namespace services-2347 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2347 to expose endpoints map[pod2:[101]] Jun 5 00:47:05.112: INFO: successfully validated that service multi-endpoint-test in namespace services-2347 exposes endpoints map[pod2:[101]] (1.091541855s elapsed) STEP: Deleting pod pod2 in namespace services-2347 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2347 to expose endpoints map[] Jun 5 00:47:06.139: INFO: successfully validated that service multi-endpoint-test in namespace services-2347 exposes endpoints map[] (1.020625188s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:47:06.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2347" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.935 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":227,"skipped":3717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:47:06.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 5 00:47:06.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-859' Jun 
5 00:47:06.455: INFO: stderr: "" Jun 5 00:47:06.455: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 5 00:47:11.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-859 -o json' Jun 5 00:47:11.599: INFO: stderr: "" Jun 5 00:47:11.599: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-05T00:47:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-05T00:47:06Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n 
\"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.223\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-05T00:47:09Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-859\",\n \"resourceVersion\": \"10345351\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-859/pods/e2e-test-httpd-pod\",\n \"uid\": \"498adc25-2178-4c1c-9e25-53cddc45ca3c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-f9kbs\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-f9kbs\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-f9kbs\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-05T00:47:06Z\",\n \"status\": \"True\",\n \"type\": 
\"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-05T00:47:09Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-05T00:47:09Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-05T00:47:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://0f5361f393d1a92426ae7b17efbf88cc8f33534aeaa22ed54b24944da2d9d33c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-05T00:47:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.223\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.223\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-05T00:47:06Z\"\n }\n}\n" STEP: replace the image in the pod Jun 5 00:47:11.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-859' Jun 5 00:47:11.833: INFO: stderr: "" Jun 5 00:47:11.833: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 Jun 5 00:47:11.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-859' Jun 5 00:47:24.872: INFO: stderr: "" Jun 5 00:47:24.872: INFO: stdout: "pod \"e2e-test-httpd-pod\" 
deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:47:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-859" for this suite. • [SLOW TEST:18.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":228,"skipped":3775,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:47:24.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod 
STEP: Wait for the deployment to be ready Jun 5 00:47:25.500: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Jun 5 00:47:27.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914845, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914845, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914845, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914845, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:47:30.579: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:47:30.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:47:31.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-webhook-9584" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.035 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":229,"skipped":3788,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:47:31.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 00:47:32.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set 
Jun 5 00:47:34.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914852, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914852, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914852, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726914852, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:47:37.927: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:47:38.096: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-883" for this suite. STEP: Destroying namespace "webhook-883-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":230,"skipped":3802,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:47:38.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-5c537a57-bb3b-4b24-9aef-7c1df1a4d288 STEP: Creating a pod to test consume secrets Jun 5 00:47:38.460: INFO: Waiting up to 5m0s 
for pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89" in namespace "projected-1315" to be "Succeeded or Failed" Jun 5 00:47:38.604: INFO: Pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89": Phase="Pending", Reason="", readiness=false. Elapsed: 144.746849ms Jun 5 00:47:40.609: INFO: Pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149579715s Jun 5 00:47:42.613: INFO: Pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89": Phase="Running", Reason="", readiness=true. Elapsed: 4.153327643s Jun 5 00:47:44.618: INFO: Pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157868281s STEP: Saw pod success Jun 5 00:47:44.618: INFO: Pod "pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89" satisfied condition "Succeeded or Failed" Jun 5 00:47:44.621: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89 container projected-secret-volume-test: STEP: delete the pod Jun 5 00:47:44.642: INFO: Waiting for pod pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89 to disappear Jun 5 00:47:44.646: INFO: Pod pod-projected-secrets-0dc41fdc-e938-448b-bcd2-b77651ad1f89 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:47:44.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1315" for this suite. 
• [SLOW TEST:6.381 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3802,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:47:44.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:47:44.702: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3255 I0605 00:47:44.759288 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3255, replica count: 1 I0605 00:47:45.809716 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:47:46.809983 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 
0 runningButNotReady I0605 00:47:47.810231 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:47:48.810456 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 5 00:47:48.947: INFO: Created: latency-svc-2mmv7 Jun 5 00:47:48.972: INFO: Got endpoints: latency-svc-2mmv7 [61.312158ms] Jun 5 00:47:49.052: INFO: Created: latency-svc-hk4mm Jun 5 00:47:49.074: INFO: Got endpoints: latency-svc-hk4mm [102.349941ms] Jun 5 00:47:49.076: INFO: Created: latency-svc-4qfzc Jun 5 00:47:49.109: INFO: Got endpoints: latency-svc-4qfzc [136.941253ms] Jun 5 00:47:49.145: INFO: Created: latency-svc-q6w2n Jun 5 00:47:49.184: INFO: Got endpoints: latency-svc-q6w2n [212.591354ms] Jun 5 00:47:49.191: INFO: Created: latency-svc-tl4xc Jun 5 00:47:49.202: INFO: Got endpoints: latency-svc-tl4xc [230.784991ms] Jun 5 00:47:49.242: INFO: Created: latency-svc-2t9qk Jun 5 00:47:49.256: INFO: Got endpoints: latency-svc-2t9qk [284.112194ms] Jun 5 00:47:49.272: INFO: Created: latency-svc-skjtc Jun 5 00:47:49.345: INFO: Got endpoints: latency-svc-skjtc [142.957834ms] Jun 5 00:47:49.372: INFO: Created: latency-svc-gs6nw Jun 5 00:47:49.376: INFO: Got endpoints: latency-svc-gs6nw [404.492178ms] Jun 5 00:47:49.403: INFO: Created: latency-svc-z8r25 Jun 5 00:47:49.413: INFO: Got endpoints: latency-svc-z8r25 [441.360078ms] Jun 5 00:47:49.433: INFO: Created: latency-svc-d56bg Jun 5 00:47:49.483: INFO: Got endpoints: latency-svc-d56bg [511.335954ms] Jun 5 00:47:49.496: INFO: Created: latency-svc-62s2n Jun 5 00:47:49.527: INFO: Got endpoints: latency-svc-62s2n [555.19137ms] Jun 5 00:47:49.574: INFO: Created: latency-svc-gkq2g Jun 5 00:47:49.658: INFO: Got endpoints: latency-svc-gkq2g [686.03854ms] Jun 5 00:47:49.679: INFO: Created: latency-svc-dvtb8 Jun 5 00:47:49.693: INFO: Got endpoints: latency-svc-dvtb8 
[720.983535ms] Jun 5 00:47:49.740: INFO: Created: latency-svc-lgvxp Jun 5 00:47:49.801: INFO: Got endpoints: latency-svc-lgvxp [829.028443ms] Jun 5 00:47:49.854: INFO: Created: latency-svc-mrs5h Jun 5 00:47:49.896: INFO: Got endpoints: latency-svc-mrs5h [924.463902ms] Jun 5 00:47:49.952: INFO: Created: latency-svc-zhfqm Jun 5 00:47:49.960: INFO: Got endpoints: latency-svc-zhfqm [987.979188ms] Jun 5 00:47:49.984: INFO: Created: latency-svc-gfrnn Jun 5 00:47:49.996: INFO: Got endpoints: latency-svc-gfrnn [1.023995003s] Jun 5 00:47:50.046: INFO: Created: latency-svc-gdk56 Jun 5 00:47:50.088: INFO: Got endpoints: latency-svc-gdk56 [1.013415033s] Jun 5 00:47:50.106: INFO: Created: latency-svc-tdr4x Jun 5 00:47:50.122: INFO: Got endpoints: latency-svc-tdr4x [1.013593231s] Jun 5 00:47:50.142: INFO: Created: latency-svc-895jn Jun 5 00:47:50.153: INFO: Got endpoints: latency-svc-895jn [969.096025ms] Jun 5 00:47:50.179: INFO: Created: latency-svc-rxscp Jun 5 00:47:50.232: INFO: Got endpoints: latency-svc-rxscp [975.586623ms] Jun 5 00:47:50.238: INFO: Created: latency-svc-7kmwh Jun 5 00:47:50.250: INFO: Got endpoints: latency-svc-7kmwh [904.461134ms] Jun 5 00:47:50.284: INFO: Created: latency-svc-n9xn6 Jun 5 00:47:50.305: INFO: Got endpoints: latency-svc-n9xn6 [929.223226ms] Jun 5 00:47:50.376: INFO: Created: latency-svc-kglr8 Jun 5 00:47:50.389: INFO: Got endpoints: latency-svc-kglr8 [975.694695ms] Jun 5 00:47:50.422: INFO: Created: latency-svc-l5tl7 Jun 5 00:47:50.435: INFO: Got endpoints: latency-svc-l5tl7 [951.890327ms] Jun 5 00:47:50.466: INFO: Created: latency-svc-zjdnv Jun 5 00:47:50.580: INFO: Got endpoints: latency-svc-zjdnv [1.052863278s] Jun 5 00:47:50.587: INFO: Created: latency-svc-9jsfx Jun 5 00:47:50.599: INFO: Got endpoints: latency-svc-9jsfx [941.230384ms] Jun 5 00:47:50.621: INFO: Created: latency-svc-hlmwr Jun 5 00:47:50.642: INFO: Got endpoints: latency-svc-hlmwr [949.487192ms] Jun 5 00:47:50.663: INFO: Created: latency-svc-bndjq Jun 5 00:47:50.729: INFO: 
Got endpoints: latency-svc-bndjq [928.019173ms] Jun 5 00:47:50.742: INFO: Created: latency-svc-fxgmn Jun 5 00:47:50.756: INFO: Got endpoints: latency-svc-fxgmn [859.772845ms] Jun 5 00:47:50.807: INFO: Created: latency-svc-7zclq Jun 5 00:47:50.824: INFO: Got endpoints: latency-svc-7zclq [864.162587ms] Jun 5 00:47:50.885: INFO: Created: latency-svc-68vcs Jun 5 00:47:50.891: INFO: Got endpoints: latency-svc-68vcs [894.996623ms] Jun 5 00:47:50.910: INFO: Created: latency-svc-p5h5s Jun 5 00:47:50.934: INFO: Got endpoints: latency-svc-p5h5s [846.172786ms] Jun 5 00:47:50.958: INFO: Created: latency-svc-xqvrb Jun 5 00:47:51.052: INFO: Got endpoints: latency-svc-xqvrb [929.258765ms] Jun 5 00:47:51.054: INFO: Created: latency-svc-s8k7s Jun 5 00:47:51.064: INFO: Got endpoints: latency-svc-s8k7s [910.529521ms] Jun 5 00:47:51.083: INFO: Created: latency-svc-mkt9m Jun 5 00:47:51.094: INFO: Got endpoints: latency-svc-mkt9m [862.155776ms] Jun 5 00:47:51.121: INFO: Created: latency-svc-sfxst Jun 5 00:47:51.136: INFO: Got endpoints: latency-svc-sfxst [885.966932ms] Jun 5 00:47:51.208: INFO: Created: latency-svc-c9vdv Jun 5 00:47:51.214: INFO: Got endpoints: latency-svc-c9vdv [908.860921ms] Jun 5 00:47:51.233: INFO: Created: latency-svc-cbkkf Jun 5 00:47:51.245: INFO: Got endpoints: latency-svc-cbkkf [855.750644ms] Jun 5 00:47:51.263: INFO: Created: latency-svc-j5lwf Jun 5 00:47:51.275: INFO: Got endpoints: latency-svc-j5lwf [839.882531ms] Jun 5 00:47:51.295: INFO: Created: latency-svc-vw5qv Jun 5 00:47:51.305: INFO: Got endpoints: latency-svc-vw5qv [725.271037ms] Jun 5 00:47:51.384: INFO: Created: latency-svc-9vs4v Jun 5 00:47:51.395: INFO: Got endpoints: latency-svc-9vs4v [796.203044ms] Jun 5 00:47:51.443: INFO: Created: latency-svc-2p82k Jun 5 00:47:51.456: INFO: Got endpoints: latency-svc-2p82k [813.695763ms] Jun 5 00:47:51.502: INFO: Created: latency-svc-dtb7h Jun 5 00:47:51.508: INFO: Got endpoints: latency-svc-dtb7h [779.072537ms] Jun 5 00:47:51.540: INFO: Created: 
latency-svc-gf9l5 Jun 5 00:47:51.550: INFO: Got endpoints: latency-svc-gf9l5 [793.640512ms] Jun 5 00:47:51.570: INFO: Created: latency-svc-wq42v Jun 5 00:47:51.588: INFO: Got endpoints: latency-svc-wq42v [763.598721ms] Jun 5 00:47:51.670: INFO: Created: latency-svc-g7z6s Jun 5 00:47:51.676: INFO: Got endpoints: latency-svc-g7z6s [785.029259ms] Jun 5 00:47:51.694: INFO: Created: latency-svc-vkvll Jun 5 00:47:51.714: INFO: Got endpoints: latency-svc-vkvll [780.475467ms] Jun 5 00:47:51.873: INFO: Created: latency-svc-zhhvl Jun 5 00:47:51.898: INFO: Created: latency-svc-x2lqr Jun 5 00:47:51.899: INFO: Got endpoints: latency-svc-zhhvl [846.921757ms] Jun 5 00:47:51.923: INFO: Got endpoints: latency-svc-x2lqr [858.654723ms] Jun 5 00:47:51.954: INFO: Created: latency-svc-g65sw Jun 5 00:47:51.966: INFO: Got endpoints: latency-svc-g65sw [871.632462ms] Jun 5 00:47:52.068: INFO: Created: latency-svc-fddw2 Jun 5 00:47:52.079: INFO: Got endpoints: latency-svc-fddw2 [943.395945ms] Jun 5 00:47:52.126: INFO: Created: latency-svc-57hjs Jun 5 00:47:52.184: INFO: Got endpoints: latency-svc-57hjs [969.866626ms] Jun 5 00:47:52.218: INFO: Created: latency-svc-qml65 Jun 5 00:47:52.230: INFO: Got endpoints: latency-svc-qml65 [985.495117ms] Jun 5 00:47:52.278: INFO: Created: latency-svc-p7lf2 Jun 5 00:47:52.339: INFO: Got endpoints: latency-svc-p7lf2 [1.064405499s] Jun 5 00:47:52.366: INFO: Created: latency-svc-5zpxp Jun 5 00:47:52.404: INFO: Got endpoints: latency-svc-5zpxp [1.09855117s] Jun 5 00:47:52.502: INFO: Created: latency-svc-srtqk Jun 5 00:47:52.513: INFO: Got endpoints: latency-svc-srtqk [1.11735292s] Jun 5 00:47:52.546: INFO: Created: latency-svc-t6ln8 Jun 5 00:47:52.570: INFO: Got endpoints: latency-svc-t6ln8 [1.114108483s] Jun 5 00:47:52.595: INFO: Created: latency-svc-psb29 Jun 5 00:47:52.673: INFO: Got endpoints: latency-svc-psb29 [1.165058761s] Jun 5 00:47:52.676: INFO: Created: latency-svc-zzt5k Jun 5 00:47:52.686: INFO: Got endpoints: latency-svc-zzt5k [1.13579664s] Jun 5 
00:47:52.704: INFO: Created: latency-svc-dp7ks Jun 5 00:47:52.732: INFO: Got endpoints: latency-svc-dp7ks [1.144172013s] Jun 5 00:47:52.762: INFO: Created: latency-svc-c9bs8 Jun 5 00:47:52.812: INFO: Got endpoints: latency-svc-c9bs8 [1.136407234s] Jun 5 00:47:52.822: INFO: Created: latency-svc-bmq2f Jun 5 00:47:52.838: INFO: Got endpoints: latency-svc-bmq2f [1.123777474s] Jun 5 00:47:52.896: INFO: Created: latency-svc-zqzd9 Jun 5 00:47:52.911: INFO: Got endpoints: latency-svc-zqzd9 [1.012453863s] Jun 5 00:47:52.987: INFO: Created: latency-svc-kkd8z Jun 5 00:47:53.013: INFO: Got endpoints: latency-svc-kkd8z [1.090695449s] Jun 5 00:47:53.059: INFO: Created: latency-svc-7ch2p Jun 5 00:47:53.161: INFO: Got endpoints: latency-svc-7ch2p [1.195066301s] Jun 5 00:47:53.172: INFO: Created: latency-svc-rhsjj Jun 5 00:47:53.194: INFO: Got endpoints: latency-svc-rhsjj [1.114731314s] Jun 5 00:47:53.219: INFO: Created: latency-svc-xfsms Jun 5 00:47:53.230: INFO: Got endpoints: latency-svc-xfsms [1.045565874s] Jun 5 00:47:53.322: INFO: Created: latency-svc-x6tt5 Jun 5 00:47:53.332: INFO: Got endpoints: latency-svc-x6tt5 [1.101679977s] Jun 5 00:47:53.387: INFO: Created: latency-svc-b6tcz Jun 5 00:47:53.398: INFO: Got endpoints: latency-svc-b6tcz [1.058898239s] Jun 5 00:47:53.477: INFO: Created: latency-svc-6mpc2 Jun 5 00:47:53.502: INFO: Got endpoints: latency-svc-6mpc2 [1.098340943s] Jun 5 00:47:53.555: INFO: Created: latency-svc-88smb Jun 5 00:47:53.609: INFO: Got endpoints: latency-svc-88smb [1.096609443s] Jun 5 00:47:53.627: INFO: Created: latency-svc-k9bbb Jun 5 00:47:53.664: INFO: Got endpoints: latency-svc-k9bbb [1.093989517s] Jun 5 00:47:53.695: INFO: Created: latency-svc-mg6lk Jun 5 00:47:53.771: INFO: Got endpoints: latency-svc-mg6lk [1.09722248s] Jun 5 00:47:53.773: INFO: Created: latency-svc-xr28q Jun 5 00:47:53.832: INFO: Got endpoints: latency-svc-xr28q [1.146506999s] Jun 5 00:47:53.868: INFO: Created: latency-svc-7dvjb Jun 5 00:47:53.963: INFO: Got endpoints: 
latency-svc-7dvjb [1.23052433s] Jun 5 00:47:53.964: INFO: Created: latency-svc-b9kvk Jun 5 00:47:53.972: INFO: Got endpoints: latency-svc-b9kvk [1.159127724s] Jun 5 00:47:53.993: INFO: Created: latency-svc-ml2qt Jun 5 00:47:54.002: INFO: Got endpoints: latency-svc-ml2qt [1.163879367s] Jun 5 00:47:54.019: INFO: Created: latency-svc-c8z5h Jun 5 00:47:54.055: INFO: Got endpoints: latency-svc-c8z5h [1.143737789s] Jun 5 00:47:54.119: INFO: Created: latency-svc-dq65x Jun 5 00:47:54.129: INFO: Got endpoints: latency-svc-dq65x [1.11580956s] Jun 5 00:47:54.155: INFO: Created: latency-svc-rgk4t Jun 5 00:47:54.165: INFO: Got endpoints: latency-svc-rgk4t [1.00415855s] Jun 5 00:47:54.191: INFO: Created: latency-svc-mw4vb Jun 5 00:47:54.202: INFO: Got endpoints: latency-svc-mw4vb [1.007464733s] Jun 5 00:47:54.262: INFO: Created: latency-svc-clzlc Jun 5 00:47:54.266: INFO: Got endpoints: latency-svc-clzlc [1.035734901s] Jun 5 00:47:54.334: INFO: Created: latency-svc-hfttj Jun 5 00:47:54.346: INFO: Got endpoints: latency-svc-hfttj [1.014348987s] Jun 5 00:47:54.415: INFO: Created: latency-svc-fblps Jun 5 00:47:54.415: INFO: Got endpoints: latency-svc-fblps [1.016663611s] Jun 5 00:47:54.462: INFO: Created: latency-svc-8skzd Jun 5 00:47:54.473: INFO: Got endpoints: latency-svc-8skzd [970.806145ms] Jun 5 00:47:54.493: INFO: Created: latency-svc-crbp5 Jun 5 00:47:54.550: INFO: Got endpoints: latency-svc-crbp5 [940.25323ms] Jun 5 00:47:54.587: INFO: Created: latency-svc-g6rm7 Jun 5 00:47:54.599: INFO: Got endpoints: latency-svc-g6rm7 [935.118555ms] Jun 5 00:47:54.629: INFO: Created: latency-svc-jxtb7 Jun 5 00:47:54.696: INFO: Got endpoints: latency-svc-jxtb7 [925.533957ms] Jun 5 00:47:54.708: INFO: Created: latency-svc-m9tpw Jun 5 00:47:54.726: INFO: Got endpoints: latency-svc-m9tpw [893.842174ms] Jun 5 00:47:54.767: INFO: Created: latency-svc-hbgnk Jun 5 00:47:54.791: INFO: Got endpoints: latency-svc-hbgnk [828.261664ms] Jun 5 00:47:54.848: INFO: Created: latency-svc-m2czh Jun 5 
00:47:54.858: INFO: Got endpoints: latency-svc-m2czh [886.794511ms] Jun 5 00:47:54.924: INFO: Created: latency-svc-n9kzg Jun 5 00:47:54.941: INFO: Got endpoints: latency-svc-n9kzg [938.728911ms] Jun 5 00:47:55.010: INFO: Created: latency-svc-682cc Jun 5 00:47:55.020: INFO: Got endpoints: latency-svc-682cc [964.875994ms] Jun 5 00:47:55.046: INFO: Created: latency-svc-qsp5h Jun 5 00:47:55.080: INFO: Got endpoints: latency-svc-qsp5h [950.769414ms] Jun 5 00:47:55.110: INFO: Created: latency-svc-vn7nv Jun 5 00:47:55.178: INFO: Got endpoints: latency-svc-vn7nv [1.012803174s] Jun 5 00:47:55.180: INFO: Created: latency-svc-6dxbk Jun 5 00:47:55.200: INFO: Got endpoints: latency-svc-6dxbk [998.136144ms] Jun 5 00:47:55.236: INFO: Created: latency-svc-xkkj6 Jun 5 00:47:55.255: INFO: Got endpoints: latency-svc-xkkj6 [988.997807ms] Jun 5 00:47:55.315: INFO: Created: latency-svc-zbl59 Jun 5 00:47:55.319: INFO: Got endpoints: latency-svc-zbl59 [972.8072ms] Jun 5 00:47:55.361: INFO: Created: latency-svc-5fm6d Jun 5 00:47:55.377: INFO: Got endpoints: latency-svc-5fm6d [961.696841ms] Jun 5 00:47:55.392: INFO: Created: latency-svc-5rhtw Jun 5 00:47:55.406: INFO: Got endpoints: latency-svc-5rhtw [932.88191ms] Jun 5 00:47:55.472: INFO: Created: latency-svc-zthfq Jun 5 00:47:55.474: INFO: Got endpoints: latency-svc-zthfq [924.620327ms] Jun 5 00:47:55.505: INFO: Created: latency-svc-md9sn Jun 5 00:47:55.521: INFO: Got endpoints: latency-svc-md9sn [921.66369ms] Jun 5 00:47:55.541: INFO: Created: latency-svc-l7fdx Jun 5 00:47:55.557: INFO: Got endpoints: latency-svc-l7fdx [860.419754ms] Jun 5 00:47:55.607: INFO: Created: latency-svc-49ntq Jun 5 00:47:55.632: INFO: Got endpoints: latency-svc-49ntq [905.825922ms] Jun 5 00:47:55.662: INFO: Created: latency-svc-jzzcj Jun 5 00:47:55.671: INFO: Got endpoints: latency-svc-jzzcj [879.888272ms] Jun 5 00:47:55.735: INFO: Created: latency-svc-rs5mt Jun 5 00:47:55.738: INFO: Got endpoints: latency-svc-rs5mt [879.087962ms] Jun 5 00:47:55.768: INFO: 
Created: latency-svc-zqg57 Jun 5 00:47:55.804: INFO: Got endpoints: latency-svc-zqg57 [863.393955ms] Jun 5 00:47:55.867: INFO: Created: latency-svc-kfjqc Jun 5 00:47:55.870: INFO: Got endpoints: latency-svc-kfjqc [849.748425ms] Jun 5 00:47:55.944: INFO: Created: latency-svc-8bwrg Jun 5 00:47:55.992: INFO: Got endpoints: latency-svc-8bwrg [912.005201ms] Jun 5 00:47:56.027: INFO: Created: latency-svc-5g42h Jun 5 00:47:56.044: INFO: Got endpoints: latency-svc-5g42h [866.56041ms] Jun 5 00:47:56.087: INFO: Created: latency-svc-vnnzs Jun 5 00:47:56.124: INFO: Got endpoints: latency-svc-vnnzs [924.095683ms] Jun 5 00:47:56.142: INFO: Created: latency-svc-m9ghn Jun 5 00:47:56.153: INFO: Got endpoints: latency-svc-m9ghn [898.214655ms] Jun 5 00:47:56.172: INFO: Created: latency-svc-pl94h Jun 5 00:47:56.207: INFO: Got endpoints: latency-svc-pl94h [887.78207ms] Jun 5 00:47:56.256: INFO: Created: latency-svc-pq5zk Jun 5 00:47:56.268: INFO: Got endpoints: latency-svc-pq5zk [890.664542ms] Jun 5 00:47:56.296: INFO: Created: latency-svc-2gvk4 Jun 5 00:47:56.447: INFO: Got endpoints: latency-svc-2gvk4 [1.041243255s] Jun 5 00:47:56.508: INFO: Created: latency-svc-6zvnn Jun 5 00:47:56.526: INFO: Got endpoints: latency-svc-6zvnn [1.051425284s] Jun 5 00:47:56.629: INFO: Created: latency-svc-t646p Jun 5 00:47:56.651: INFO: Got endpoints: latency-svc-t646p [1.129702852s] Jun 5 00:47:56.681: INFO: Created: latency-svc-clqmx Jun 5 00:47:56.695: INFO: Got endpoints: latency-svc-clqmx [1.138295839s] Jun 5 00:47:56.766: INFO: Created: latency-svc-gzm7c Jun 5 00:47:56.779: INFO: Got endpoints: latency-svc-gzm7c [1.146698552s] Jun 5 00:47:56.802: INFO: Created: latency-svc-pq4tj Jun 5 00:47:56.815: INFO: Got endpoints: latency-svc-pq4tj [1.143880466s] Jun 5 00:47:56.838: INFO: Created: latency-svc-d2szw Jun 5 00:47:56.851: INFO: Got endpoints: latency-svc-d2szw [1.113204049s] Jun 5 00:47:56.902: INFO: Created: latency-svc-ckqns Jun 5 00:47:56.928: INFO: Created: latency-svc-7nnsv Jun 5 
00:47:56.930: INFO: Got endpoints: latency-svc-ckqns [1.125217734s] Jun 5 00:47:56.985: INFO: Got endpoints: latency-svc-7nnsv [1.115495561s] Jun 5 00:47:57.046: INFO: Created: latency-svc-s76kt Jun 5 00:47:57.052: INFO: Got endpoints: latency-svc-s76kt [1.060316392s] Jun 5 00:47:57.078: INFO: Created: latency-svc-rp8lb Jun 5 00:47:57.089: INFO: Got endpoints: latency-svc-rp8lb [1.044639598s] Jun 5 00:47:57.119: INFO: Created: latency-svc-mdpmm Jun 5 00:47:57.131: INFO: Got endpoints: latency-svc-mdpmm [1.007117236s] Jun 5 00:47:57.184: INFO: Created: latency-svc-682fn Jun 5 00:47:57.223: INFO: Created: latency-svc-258vd Jun 5 00:47:57.223: INFO: Got endpoints: latency-svc-682fn [1.070129691s] Jun 5 00:47:57.240: INFO: Got endpoints: latency-svc-258vd [1.032881098s] Jun 5 00:47:57.258: INFO: Created: latency-svc-r9rpf Jun 5 00:47:57.270: INFO: Got endpoints: latency-svc-r9rpf [1.002627191s] Jun 5 00:47:57.322: INFO: Created: latency-svc-77k8b Jun 5 00:47:57.325: INFO: Got endpoints: latency-svc-77k8b [877.765412ms] Jun 5 00:47:57.365: INFO: Created: latency-svc-zfkg4 Jun 5 00:47:57.379: INFO: Got endpoints: latency-svc-zfkg4 [852.829215ms] Jun 5 00:47:57.418: INFO: Created: latency-svc-mk44j Jun 5 00:47:57.459: INFO: Got endpoints: latency-svc-mk44j [808.256829ms] Jun 5 00:47:57.474: INFO: Created: latency-svc-lfjbl Jun 5 00:47:57.498: INFO: Got endpoints: latency-svc-lfjbl [803.066035ms] Jun 5 00:47:57.522: INFO: Created: latency-svc-9qp95 Jun 5 00:47:57.536: INFO: Got endpoints: latency-svc-9qp95 [757.623601ms] Jun 5 00:47:57.556: INFO: Created: latency-svc-l7c4k Jun 5 00:47:57.611: INFO: Got endpoints: latency-svc-l7c4k [796.347936ms] Jun 5 00:47:57.634: INFO: Created: latency-svc-9n4bx Jun 5 00:47:57.645: INFO: Got endpoints: latency-svc-9n4bx [793.941229ms] Jun 5 00:47:57.660: INFO: Created: latency-svc-xqlv8 Jun 5 00:47:57.684: INFO: Got endpoints: latency-svc-xqlv8 [753.992092ms] Jun 5 00:47:57.753: INFO: Created: latency-svc-x6swn Jun 5 00:47:57.797: INFO: 
Got endpoints: latency-svc-x6swn [811.333655ms] Jun 5 00:47:57.797: INFO: Created: latency-svc-9p98n Jun 5 00:47:57.808: INFO: Got endpoints: latency-svc-9p98n [755.401277ms] Jun 5 00:47:57.838: INFO: Created: latency-svc-tchrr Jun 5 00:47:57.850: INFO: Got endpoints: latency-svc-tchrr [761.093432ms] Jun 5 00:47:57.909: INFO: Created: latency-svc-rd5zh Jun 5 00:47:57.915: INFO: Got endpoints: latency-svc-rd5zh [784.201025ms] Jun 5 00:47:57.936: INFO: Created: latency-svc-h9xdx Jun 5 00:47:57.952: INFO: Got endpoints: latency-svc-h9xdx [728.985966ms] Jun 5 00:47:57.976: INFO: Created: latency-svc-2gz48 Jun 5 00:47:57.988: INFO: Got endpoints: latency-svc-2gz48 [748.55336ms] Jun 5 00:47:58.052: INFO: Created: latency-svc-sqkj9 Jun 5 00:47:58.057: INFO: Got endpoints: latency-svc-sqkj9 [787.050875ms] Jun 5 00:47:58.124: INFO: Created: latency-svc-f4bhz Jun 5 00:47:58.152: INFO: Created: latency-svc-lhlq4 Jun 5 00:47:58.202: INFO: Got endpoints: latency-svc-f4bhz [876.529873ms] Jun 5 00:47:58.202: INFO: Got endpoints: latency-svc-lhlq4 [822.893676ms] Jun 5 00:47:58.228: INFO: Created: latency-svc-s6tpl Jun 5 00:47:58.254: INFO: Got endpoints: latency-svc-s6tpl [794.399637ms] Jun 5 00:47:58.352: INFO: Created: latency-svc-rgqdc Jun 5 00:47:58.380: INFO: Got endpoints: latency-svc-rgqdc [881.443903ms] Jun 5 00:47:58.380: INFO: Created: latency-svc-2m6mg Jun 5 00:47:58.392: INFO: Got endpoints: latency-svc-2m6mg [855.490023ms] Jun 5 00:47:58.410: INFO: Created: latency-svc-zvcnn Jun 5 00:47:58.422: INFO: Got endpoints: latency-svc-zvcnn [810.942835ms] Jun 5 00:47:58.438: INFO: Created: latency-svc-n8gqn Jun 5 00:47:58.489: INFO: Got endpoints: latency-svc-n8gqn [844.23576ms] Jun 5 00:47:58.491: INFO: Created: latency-svc-mp8lk Jun 5 00:47:58.516: INFO: Got endpoints: latency-svc-mp8lk [832.248374ms] Jun 5 00:47:58.542: INFO: Created: latency-svc-p567m Jun 5 00:47:58.555: INFO: Got endpoints: latency-svc-p567m [758.485597ms] Jun 5 00:47:58.571: INFO: Created: 
latency-svc-7krmv Jun 5 00:47:58.640: INFO: Got endpoints: latency-svc-7krmv [831.609797ms] Jun 5 00:47:58.660: INFO: Created: latency-svc-ztspg Jun 5 00:47:58.670: INFO: Got endpoints: latency-svc-ztspg [819.191395ms] Jun 5 00:47:58.710: INFO: Created: latency-svc-97w6p Jun 5 00:47:58.788: INFO: Got endpoints: latency-svc-97w6p [872.831562ms] Jun 5 00:47:58.804: INFO: Created: latency-svc-lzjnj Jun 5 00:47:58.817: INFO: Got endpoints: latency-svc-lzjnj [865.212046ms] Jun 5 00:47:58.842: INFO: Created: latency-svc-b4gmw Jun 5 00:47:58.864: INFO: Got endpoints: latency-svc-b4gmw [875.685883ms] Jun 5 00:47:58.950: INFO: Created: latency-svc-qc5sd Jun 5 00:47:58.962: INFO: Got endpoints: latency-svc-qc5sd [904.64373ms] Jun 5 00:47:58.998: INFO: Created: latency-svc-xl52l Jun 5 00:47:59.010: INFO: Got endpoints: latency-svc-xl52l [808.388186ms] Jun 5 00:47:59.032: INFO: Created: latency-svc-xr8cx Jun 5 00:47:59.047: INFO: Got endpoints: latency-svc-xr8cx [844.948663ms] Jun 5 00:47:59.100: INFO: Created: latency-svc-8rbsm Jun 5 00:47:59.124: INFO: Got endpoints: latency-svc-8rbsm [870.171625ms] Jun 5 00:47:59.154: INFO: Created: latency-svc-2w55m Jun 5 00:47:59.167: INFO: Got endpoints: latency-svc-2w55m [787.467619ms] Jun 5 00:47:59.188: INFO: Created: latency-svc-gbn7p Jun 5 00:47:59.244: INFO: Got endpoints: latency-svc-gbn7p [852.568372ms] Jun 5 00:47:59.279: INFO: Created: latency-svc-vzmtf Jun 5 00:47:59.294: INFO: Got endpoints: latency-svc-vzmtf [871.584766ms] Jun 5 00:47:59.315: INFO: Created: latency-svc-mx6gk Jun 5 00:47:59.330: INFO: Got endpoints: latency-svc-mx6gk [840.94642ms] Jun 5 00:47:59.375: INFO: Created: latency-svc-qfdnj Jun 5 00:47:59.378: INFO: Got endpoints: latency-svc-qfdnj [861.538426ms] Jun 5 00:47:59.441: INFO: Created: latency-svc-cdxcs Jun 5 00:47:59.461: INFO: Got endpoints: latency-svc-cdxcs [906.042043ms] Jun 5 00:47:59.519: INFO: Created: latency-svc-drxxc Jun 5 00:47:59.550: INFO: Got endpoints: latency-svc-drxxc [910.197856ms] Jun 
5 00:47:59.587: INFO: Created: latency-svc-l6tnq Jun 5 00:47:59.595: INFO: Got endpoints: latency-svc-l6tnq [925.00813ms] Jun 5 00:47:59.616: INFO: Created: latency-svc-wwz72 Jun 5 00:47:59.657: INFO: Got endpoints: latency-svc-wwz72 [868.737161ms] Jun 5 00:47:59.659: INFO: Created: latency-svc-2dxp9 Jun 5 00:47:59.681: INFO: Got endpoints: latency-svc-2dxp9 [863.02662ms] Jun 5 00:47:59.711: INFO: Created: latency-svc-6cgz4 Jun 5 00:47:59.722: INFO: Got endpoints: latency-svc-6cgz4 [857.434137ms] Jun 5 00:47:59.742: INFO: Created: latency-svc-rf26t Jun 5 00:47:59.819: INFO: Got endpoints: latency-svc-rf26t [857.052765ms] Jun 5 00:47:59.820: INFO: Created: latency-svc-xqq8r Jun 5 00:47:59.843: INFO: Got endpoints: latency-svc-xqq8r [832.488926ms] Jun 5 00:47:59.967: INFO: Created: latency-svc-zklhs Jun 5 00:47:59.982: INFO: Got endpoints: latency-svc-zklhs [935.07933ms] Jun 5 00:48:00.006: INFO: Created: latency-svc-hrdc6 Jun 5 00:48:00.017: INFO: Got endpoints: latency-svc-hrdc6 [893.351234ms] Jun 5 00:48:00.060: INFO: Created: latency-svc-rp486 Jun 5 00:48:00.094: INFO: Got endpoints: latency-svc-rp486 [926.82274ms] Jun 5 00:48:00.100: INFO: Created: latency-svc-rq89z Jun 5 00:48:00.113: INFO: Got endpoints: latency-svc-rq89z [868.973813ms] Jun 5 00:48:00.136: INFO: Created: latency-svc-4vvqx Jun 5 00:48:00.162: INFO: Got endpoints: latency-svc-4vvqx [868.225287ms] Jun 5 00:48:00.192: INFO: Created: latency-svc-7b8wc Jun 5 00:48:00.244: INFO: Got endpoints: latency-svc-7b8wc [913.580562ms] Jun 5 00:48:00.252: INFO: Created: latency-svc-jj6qf Jun 5 00:48:00.267: INFO: Got endpoints: latency-svc-jj6qf [889.701194ms] Jun 5 00:48:00.287: INFO: Created: latency-svc-txb8v Jun 5 00:48:00.301: INFO: Got endpoints: latency-svc-txb8v [839.855866ms] Jun 5 00:48:00.328: INFO: Created: latency-svc-cg5ff Jun 5 00:48:00.388: INFO: Got endpoints: latency-svc-cg5ff [837.734414ms] Jun 5 00:48:00.401: INFO: Created: latency-svc-lvm2r Jun 5 00:48:00.415: INFO: Got endpoints: 
latency-svc-lvm2r [820.405678ms] Jun 5 00:48:00.451: INFO: Created: latency-svc-f4xp4 Jun 5 00:48:00.464: INFO: Got endpoints: latency-svc-f4xp4 [806.481524ms] Jun 5 00:48:00.519: INFO: Created: latency-svc-clphd Jun 5 00:48:00.523: INFO: Got endpoints: latency-svc-clphd [842.159441ms] Jun 5 00:48:00.587: INFO: Created: latency-svc-khkgt Jun 5 00:48:00.602: INFO: Got endpoints: latency-svc-khkgt [880.595337ms] Jun 5 00:48:00.664: INFO: Created: latency-svc-zjrdv Jun 5 00:48:00.666: INFO: Got endpoints: latency-svc-zjrdv [847.245565ms] Jun 5 00:48:00.702: INFO: Created: latency-svc-kn595 Jun 5 00:48:00.713: INFO: Got endpoints: latency-svc-kn595 [870.689287ms] Jun 5 00:48:00.730: INFO: Created: latency-svc-zwbfm Jun 5 00:48:00.754: INFO: Got endpoints: latency-svc-zwbfm [772.121344ms] Jun 5 00:48:00.813: INFO: Created: latency-svc-kqkbn Jun 5 00:48:00.822: INFO: Got endpoints: latency-svc-kqkbn [804.609112ms] Jun 5 00:48:00.839: INFO: Created: latency-svc-njqd4 Jun 5 00:48:00.864: INFO: Got endpoints: latency-svc-njqd4 [769.419116ms] Jun 5 00:48:00.900: INFO: Created: latency-svc-tzxt4 Jun 5 00:48:00.969: INFO: Got endpoints: latency-svc-tzxt4 [855.283201ms] Jun 5 00:48:00.972: INFO: Created: latency-svc-vvh5c Jun 5 00:48:00.979: INFO: Got endpoints: latency-svc-vvh5c [816.610347ms] Jun 5 00:48:01.026: INFO: Created: latency-svc-97lsx Jun 5 00:48:01.039: INFO: Got endpoints: latency-svc-97lsx [795.248279ms] Jun 5 00:48:01.106: INFO: Created: latency-svc-tcx6q Jun 5 00:48:01.122: INFO: Got endpoints: latency-svc-tcx6q [854.32649ms] Jun 5 00:48:01.163: INFO: Created: latency-svc-5w5nk Jun 5 00:48:01.178: INFO: Got endpoints: latency-svc-5w5nk [877.054718ms] Jun 5 00:48:01.256: INFO: Created: latency-svc-hnq69 Jun 5 00:48:01.259: INFO: Got endpoints: latency-svc-hnq69 [871.325605ms] Jun 5 00:48:01.288: INFO: Created: latency-svc-jd9pp Jun 5 00:48:01.298: INFO: Got endpoints: latency-svc-jd9pp [882.816261ms] Jun 5 00:48:01.298: INFO: Latencies: [102.349941ms 
136.941253ms 142.957834ms 212.591354ms 230.784991ms 284.112194ms 404.492178ms 441.360078ms 511.335954ms 555.19137ms 686.03854ms 720.983535ms 725.271037ms 728.985966ms 748.55336ms 753.992092ms 755.401277ms 757.623601ms 758.485597ms 761.093432ms 763.598721ms 769.419116ms 772.121344ms 779.072537ms 780.475467ms 784.201025ms 785.029259ms 787.050875ms 787.467619ms 793.640512ms 793.941229ms 794.399637ms 795.248279ms 796.203044ms 796.347936ms 803.066035ms 804.609112ms 806.481524ms 808.256829ms 808.388186ms 810.942835ms 811.333655ms 813.695763ms 816.610347ms 819.191395ms 820.405678ms 822.893676ms 828.261664ms 829.028443ms 831.609797ms 832.248374ms 832.488926ms 837.734414ms 839.855866ms 839.882531ms 840.94642ms 842.159441ms 844.23576ms 844.948663ms 846.172786ms 846.921757ms 847.245565ms 849.748425ms 852.568372ms 852.829215ms 854.32649ms 855.283201ms 855.490023ms 855.750644ms 857.052765ms 857.434137ms 858.654723ms 859.772845ms 860.419754ms 861.538426ms 862.155776ms 863.02662ms 863.393955ms 864.162587ms 865.212046ms 866.56041ms 868.225287ms 868.737161ms 868.973813ms 870.171625ms 870.689287ms 871.325605ms 871.584766ms 871.632462ms 872.831562ms 875.685883ms 876.529873ms 877.054718ms 877.765412ms 879.087962ms 879.888272ms 880.595337ms 881.443903ms 882.816261ms 885.966932ms 886.794511ms 887.78207ms 889.701194ms 890.664542ms 893.351234ms 893.842174ms 894.996623ms 898.214655ms 904.461134ms 904.64373ms 905.825922ms 906.042043ms 908.860921ms 910.197856ms 910.529521ms 912.005201ms 913.580562ms 921.66369ms 924.095683ms 924.463902ms 924.620327ms 925.00813ms 925.533957ms 926.82274ms 928.019173ms 929.223226ms 929.258765ms 932.88191ms 935.07933ms 935.118555ms 938.728911ms 940.25323ms 941.230384ms 943.395945ms 949.487192ms 950.769414ms 951.890327ms 961.696841ms 964.875994ms 969.096025ms 969.866626ms 970.806145ms 972.8072ms 975.586623ms 975.694695ms 985.495117ms 987.979188ms 988.997807ms 998.136144ms 1.002627191s 1.00415855s 1.007117236s 1.007464733s 1.012453863s 1.012803174s 1.013415033s 
1.013593231s 1.014348987s 1.016663611s 1.023995003s 1.032881098s 1.035734901s 1.041243255s 1.044639598s 1.045565874s 1.051425284s 1.052863278s 1.058898239s 1.060316392s 1.064405499s 1.070129691s 1.090695449s 1.093989517s 1.096609443s 1.09722248s 1.098340943s 1.09855117s 1.101679977s 1.113204049s 1.114108483s 1.114731314s 1.115495561s 1.11580956s 1.11735292s 1.123777474s 1.125217734s 1.129702852s 1.13579664s 1.136407234s 1.138295839s 1.143737789s 1.143880466s 1.144172013s 1.146506999s 1.146698552s 1.159127724s 1.163879367s 1.165058761s 1.195066301s 1.23052433s] Jun 5 00:48:01.298: INFO: 50 %ile: 886.794511ms Jun 5 00:48:01.298: INFO: 90 %ile: 1.114731314s Jun 5 00:48:01.298: INFO: 99 %ile: 1.195066301s Jun 5 00:48:01.298: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:48:01.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3255" for this suite. 
• [SLOW TEST:16.675 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":232,"skipped":3806,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:48:01.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:48:01.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8821" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":233,"skipped":3812,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:48:01.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e277816a-d501-4917-a951-aa024a25e62e STEP: Creating a pod to test consume configMaps Jun 5 00:48:01.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6" in namespace "configmap-6308" to be "Succeeded or Failed" Jun 5 00:48:01.574: INFO: Pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617071ms Jun 5 00:48:03.578: INFO: Pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009186763s Jun 5 00:48:05.582: INFO: Pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6": Phase="Running", Reason="", readiness=true. Elapsed: 4.01304151s Jun 5 00:48:07.589: INFO: Pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019893835s STEP: Saw pod success Jun 5 00:48:07.589: INFO: Pod "pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6" satisfied condition "Succeeded or Failed" Jun 5 00:48:07.595: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6 container configmap-volume-test: STEP: delete the pod Jun 5 00:48:07.655: INFO: Waiting for pod pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6 to disappear Jun 5 00:48:07.660: INFO: Pod pod-configmaps-e55759d6-9322-475c-a3e6-6b0500796df6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:48:07.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6308" for this suite. • [SLOW TEST:6.204 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:48:07.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:49:07.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5730" for this suite. • [SLOW TEST:60.089 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3852,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:49:07.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-88dee997-d60b-4847-a62e-66ea129b9025 STEP: Creating a pod to test consume secrets Jun 5 00:49:07.882: INFO: Waiting up to 5m0s for pod "pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e" in namespace "secrets-7588" to be "Succeeded or Failed" Jun 5 00:49:07.895: INFO: Pod "pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.371229ms Jun 5 00:49:09.899: INFO: Pod "pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016922776s Jun 5 00:49:11.903: INFO: Pod "pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020518563s STEP: Saw pod success Jun 5 00:49:11.903: INFO: Pod "pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e" satisfied condition "Succeeded or Failed" Jun 5 00:49:11.906: INFO: Trying to get logs from node latest-worker pod pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e container secret-env-test: STEP: delete the pod Jun 5 00:49:11.922: INFO: Waiting for pod pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e to disappear Jun 5 00:49:11.926: INFO: Pod pod-secrets-2220ba46-27c6-4c80-8adb-189e3ff5255e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:49:11.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7588" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3863,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:49:11.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 5 00:49:20.131: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 5 00:49:20.151: INFO: Pod pod-with-prestop-exec-hook still exists Jun 5 00:49:22.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 5 00:49:22.155: INFO: Pod pod-with-prestop-exec-hook still exists Jun 5 00:49:24.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 5 00:49:24.156: INFO: Pod pod-with-prestop-exec-hook still exists Jun 5 00:49:26.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 5 00:49:26.156: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:49:26.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6852" for this suite. 
• [SLOW TEST:14.237 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3868,"failed":0} SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:49:26.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 in namespace container-probe-7626 Jun 5 00:49:30.267: INFO: Started pod liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 in namespace container-probe-7626 STEP: checking the pod's current state and verifying that restartCount is present Jun 5 00:49:30.270: INFO: Initial restart 
count of pod liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is 0 Jun 5 00:49:44.326: INFO: Restart count of pod container-probe-7626/liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is now 1 (14.056204792s elapsed) Jun 5 00:50:04.393: INFO: Restart count of pod container-probe-7626/liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is now 2 (34.122741148s elapsed) Jun 5 00:50:24.501: INFO: Restart count of pod container-probe-7626/liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is now 3 (54.230987612s elapsed) Jun 5 00:50:44.545: INFO: Restart count of pod container-probe-7626/liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is now 4 (1m14.274307154s elapsed) Jun 5 00:51:52.738: INFO: Restart count of pod container-probe-7626/liveness-7eed9402-272e-4819-9657-2ad43aab2ac7 is now 5 (2m22.467417063s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:51:52.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7626" for this suite. 
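The probing test above records the restart count at each observation (1 through 5 over ~2m22s) and asserts it only ever increases. The invariant can be sketched as a simple check over the observed series (`isMonotonic` is a hypothetical helper, not the suite's actual code):

```go
package main

import "fmt"

// isMonotonic reports whether a series of observed restart counts never
// decreases — the property the "monotonically increasing restart count"
// test verifies about the kubelet's restart accounting. Sketch only.
func isMonotonic(counts []int32) bool {
	for i := 1; i < len(counts); i++ {
		if counts[i] < counts[i-1] {
			return false
		}
	}
	return true
}

func main() {
	observed := []int32{0, 1, 2, 3, 4, 5} // the counts logged above
	fmt.Println(isMonotonic(observed))
	fmt.Println(isMonotonic([]int32{0, 2, 1})) // false: count went backwards
}
```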
• [SLOW TEST:146.590 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3870,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:51:52.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:51:52.823: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 5 00:51:55.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1437 create -f -' Jun 5 00:51:58.707: INFO: stderr: "" Jun 5 00:51:58.707: INFO: stdout: "e2e-test-crd-publish-openapi-8205-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 5 00:51:58.707: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1437 delete e2e-test-crd-publish-openapi-8205-crds test-cr' Jun 5 00:51:58.821: INFO: stderr: "" Jun 5 00:51:58.821: INFO: stdout: "e2e-test-crd-publish-openapi-8205-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 5 00:51:58.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1437 apply -f -' Jun 5 00:51:59.109: INFO: stderr: "" Jun 5 00:51:59.109: INFO: stdout: "e2e-test-crd-publish-openapi-8205-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 5 00:51:59.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1437 delete e2e-test-crd-publish-openapi-8205-crds test-cr' Jun 5 00:51:59.241: INFO: stderr: "" Jun 5 00:51:59.241: INFO: stdout: "e2e-test-crd-publish-openapi-8205-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 5 00:51:59.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8205-crds' Jun 5 00:51:59.487: INFO: stderr: "" Jun 5 00:51:59.487: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8205-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:02.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1437" for this suite. 
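The CRD test above relies on a schema that preserves unknown fields at its root, which is why `kubectl create`/`apply` accept a custom resource with arbitrary properties and why `kubectl explain` prints an empty description. A generic sketch of such a CRD (group, kind, and names here are placeholders, not the generated `e2e-test-crd-publish-openapi-8205-crd` definition):

```yaml
# Hypothetical CRD whose root schema keeps unknown fields: client-side
# validation cannot reject extra properties, matching the behaviour
# verified by the test.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com     # placeholder name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```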
• [SLOW TEST:9.665 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":239,"skipped":3870,"failed":0} SSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:02.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:02.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8887" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":240,"skipped":3874,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:02.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:52:02.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55" in namespace "projected-5557" to be "Succeeded or Failed" Jun 5 00:52:02.600: INFO: Pod "downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55": Phase="Pending", Reason="", readiness=false. Elapsed: 19.782835ms Jun 5 00:52:04.604: INFO: Pod "downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024209507s Jun 5 00:52:06.608: INFO: Pod "downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028552462s STEP: Saw pod success Jun 5 00:52:06.608: INFO: Pod "downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55" satisfied condition "Succeeded or Failed" Jun 5 00:52:06.611: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55 container client-container: STEP: delete the pod Jun 5 00:52:06.781: INFO: Waiting for pod downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55 to disappear Jun 5 00:52:06.846: INFO: Pod downwardapi-volume-9c74c602-6aa4-4af6-ab9c-a14ef5109f55 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:06.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5557" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":3875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:06.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:52:07.000: INFO: Waiting up to 5m0s for pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391" in namespace "security-context-test-8480" to be "Succeeded or Failed" Jun 5 00:52:07.023: INFO: Pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391": Phase="Pending", Reason="", readiness=false. Elapsed: 22.824484ms Jun 5 00:52:09.028: INFO: Pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027495353s Jun 5 00:52:11.032: INFO: Pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031873886s Jun 5 00:52:13.037: INFO: Pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03676039s Jun 5 00:52:13.037: INFO: Pod "busybox-user-65534-53ba847a-7b03-4e01-9c10-104df53d8391" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:13.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8480" for this suite. 
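The security-context test above waits for a busybox pod to succeed while running as uid 65534. A minimal sketch of the kind of pod it creates (name and command are illustrative):

```yaml
# Hypothetical pod running its container as uid 65534 (the conventional
# "nobody" uid); the container-level securityContext overrides the
# image's default user.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo  # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # prints the effective uid
    securityContext:
      runAsUser: 65534
```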
• [SLOW TEST:6.201 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":3901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:13.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jun 5 00:52:13.134: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jun 5 00:52:13.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:13.462: INFO: stderr: "" Jun 5 00:52:13.462: INFO: stdout: "service/agnhost-slave created\n" Jun 5 00:52:13.462: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jun 5 00:52:13.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:13.760: INFO: stderr: "" Jun 5 00:52:13.760: INFO: stdout: "service/agnhost-master created\n" Jun 5 00:52:13.760: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 5 00:52:13.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:14.077: INFO: stderr: "" Jun 5 00:52:14.077: INFO: stdout: "service/frontend created\n" Jun 5 00:52:14.077: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 5 00:52:14.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:14.321: INFO: stderr: "" Jun 5 00:52:14.321: INFO: stdout: "deployment.apps/frontend created\n" Jun 5 
00:52:14.322: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 5 00:52:14.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:14.646: INFO: stderr: "" Jun 5 00:52:14.647: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 5 00:52:14.647: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 5 00:52:14.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3824' Jun 5 00:52:14.895: INFO: stderr: "" Jun 5 00:52:14.895: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 5 00:52:14.895: INFO: Waiting for all frontend pods to be Running. Jun 5 00:52:24.945: INFO: Waiting for frontend to serve content. Jun 5 00:52:24.957: INFO: Trying to add a new entry to the guestbook. Jun 5 00:52:24.968: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Jun 5 00:52:24.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:25.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:25.155: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 5 00:52:25.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:25.336: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:25.336: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 5 00:52:25.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:25.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:25.467: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 5 00:52:25.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:25.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:25.566: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 5 00:52:25.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:25.687: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:25.687: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 5 00:52:25.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3824' Jun 5 00:52:26.124: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:26.124: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:26.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3824" for this suite. 
• [SLOW TEST:13.110 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":243,"skipped":3925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:26.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-4f7d7772-6ca8-438f-86b3-ca1dafc00c3b STEP: Creating a pod to test consume secrets Jun 5 00:52:26.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55" in namespace "projected-1417" to be "Succeeded or Failed" Jun 5 00:52:26.888: INFO: Pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55": Phase="Pending", Reason="", readiness=false. 
Elapsed: 208.0611ms Jun 5 00:52:28.893: INFO: Pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213568729s Jun 5 00:52:30.898: INFO: Pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218153758s Jun 5 00:52:32.902: INFO: Pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.222540244s STEP: Saw pod success Jun 5 00:52:32.902: INFO: Pod "pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55" satisfied condition "Succeeded or Failed" Jun 5 00:52:32.905: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55 container projected-secret-volume-test: STEP: delete the pod Jun 5 00:52:32.973: INFO: Waiting for pod pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55 to disappear Jun 5 00:52:32.979: INFO: Pod pod-projected-secrets-75287cc1-276b-43f6-b113-0174982fef55 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:32.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1417" for this suite. 
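The projected-secret test above mounts a secret through a projected volume and checks the permission bits on the resulting files. A rough sketch of the pod it creates (the secret name and mode are illustrative; the framework generates its own):

```yaml
# Hypothetical pod consuming a secret via a projected volume with
# defaultMode set: every file materialised from the secret gets
# mode 0400 unless a per-key mode overrides it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400              # applied to files from all sources
      sources:
      - secret:
          name: projected-secret-test   # placeholder secret name
```

Note that for a projected volume the `defaultMode` sits on the `projected` block, not on the individual `secret` source.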
• [SLOW TEST:6.818 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":3964,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:32.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 5 00:52:33.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9397' Jun 5 00:52:33.325: INFO: stderr: "" Jun 5 00:52:33.325: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 5 00:52:33.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:33.495: INFO: stderr: "" Jun 5 00:52:33.495: INFO: stdout: "update-demo-nautilus-2wxrs update-demo-nautilus-chxvk " Jun 5 00:52:33.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:33.588: INFO: stderr: "" Jun 5 00:52:33.588: INFO: stdout: "" Jun 5 00:52:33.588: INFO: update-demo-nautilus-2wxrs is created but not running Jun 5 00:52:38.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:38.698: INFO: stderr: "" Jun 5 00:52:38.698: INFO: stdout: "update-demo-nautilus-2wxrs update-demo-nautilus-chxvk " Jun 5 00:52:38.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:38.823: INFO: stderr: "" Jun 5 00:52:38.823: INFO: stdout: "true" Jun 5 00:52:38.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:38.926: INFO: stderr: "" Jun 5 00:52:38.926: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:38.926: INFO: validating pod update-demo-nautilus-2wxrs Jun 5 00:52:38.930: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:38.931: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 5 00:52:38.931: INFO: update-demo-nautilus-2wxrs is verified up and running Jun 5 00:52:38.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chxvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:39.043: INFO: stderr: "" Jun 5 00:52:39.043: INFO: stdout: "true" Jun 5 00:52:39.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chxvk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:39.138: INFO: stderr: "" Jun 5 00:52:39.138: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:39.138: INFO: validating pod update-demo-nautilus-chxvk Jun 5 00:52:39.143: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:39.143: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 5 00:52:39.143: INFO: update-demo-nautilus-chxvk is verified up and running STEP: scaling down the replication controller Jun 5 00:52:39.145: INFO: scanned /root for discovery docs: Jun 5 00:52:39.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9397' Jun 5 00:52:40.326: INFO: stderr: "" Jun 5 00:52:40.326: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 5 00:52:40.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:40.443: INFO: stderr: "" Jun 5 00:52:40.443: INFO: stdout: "update-demo-nautilus-2wxrs update-demo-nautilus-chxvk " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 5 00:52:45.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:45.549: INFO: stderr: "" Jun 5 00:52:45.549: INFO: stdout: "update-demo-nautilus-2wxrs " Jun 5 00:52:45.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:45.648: INFO: stderr: "" Jun 5 00:52:45.648: INFO: stdout: "true" Jun 5 00:52:45.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:45.747: INFO: stderr: "" Jun 5 00:52:45.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:45.747: INFO: validating pod update-demo-nautilus-2wxrs Jun 5 00:52:45.750: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:45.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 5 00:52:45.750: INFO: update-demo-nautilus-2wxrs is verified up and running STEP: scaling up the replication controller Jun 5 00:52:45.752: INFO: scanned /root for discovery docs: Jun 5 00:52:45.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9397' Jun 5 00:52:46.898: INFO: stderr: "" Jun 5 00:52:46.898: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 5 00:52:46.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:47.038: INFO: stderr: "" Jun 5 00:52:47.038: INFO: stdout: "update-demo-nautilus-2wxrs update-demo-nautilus-tw7cr " Jun 5 00:52:47.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:47.215: INFO: stderr: "" Jun 5 00:52:47.215: INFO: stdout: "true" Jun 5 00:52:47.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:47.310: INFO: stderr: "" Jun 5 00:52:47.310: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:47.310: INFO: validating pod update-demo-nautilus-2wxrs Jun 5 00:52:47.313: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:47.313: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 5 00:52:47.313: INFO: update-demo-nautilus-2wxrs is verified up and running Jun 5 00:52:47.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tw7cr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:47.421: INFO: stderr: "" Jun 5 00:52:47.421: INFO: stdout: "" Jun 5 00:52:47.421: INFO: update-demo-nautilus-tw7cr is created but not running Jun 5 00:52:52.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9397' Jun 5 00:52:52.522: INFO: stderr: "" Jun 5 00:52:52.522: INFO: stdout: "update-demo-nautilus-2wxrs update-demo-nautilus-tw7cr " Jun 5 00:52:52.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:52.662: INFO: stderr: "" Jun 5 00:52:52.662: INFO: stdout: "true" Jun 5 00:52:52.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wxrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:52.763: INFO: stderr: "" Jun 5 00:52:52.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:52.763: INFO: validating pod update-demo-nautilus-2wxrs Jun 5 00:52:52.767: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:52.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 5 00:52:52.767: INFO: update-demo-nautilus-2wxrs is verified up and running Jun 5 00:52:52.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tw7cr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:52.864: INFO: stderr: "" Jun 5 00:52:52.864: INFO: stdout: "true" Jun 5 00:52:52.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tw7cr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9397' Jun 5 00:52:52.984: INFO: stderr: "" Jun 5 00:52:52.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 5 00:52:52.984: INFO: validating pod update-demo-nautilus-tw7cr Jun 5 00:52:52.989: INFO: got data: { "image": "nautilus.jpg" } Jun 5 00:52:52.989: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 5 00:52:52.989: INFO: update-demo-nautilus-tw7cr is verified up and running STEP: using delete to clean up resources Jun 5 00:52:52.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9397' Jun 5 00:52:53.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 5 00:52:53.100: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 5 00:52:53.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9397' Jun 5 00:52:53.223: INFO: stderr: "No resources found in kubectl-9397 namespace.\n" Jun 5 00:52:53.224: INFO: stdout: "" Jun 5 00:52:53.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9397 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 5 00:52:53.361: INFO: stderr: "" Jun 5 00:52:53.361: INFO: stdout: "update-demo-nautilus-2wxrs\nupdate-demo-nautilus-tw7cr\n" Jun 5 00:52:53.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9397' Jun 5 00:52:53.966: INFO: stderr: "No resources found in kubectl-9397 namespace.\n" Jun 5 00:52:53.966: INFO: stdout: "" Jun 5 00:52:53.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9397 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 5 00:52:54.069: INFO: stderr: "" Jun 5 00:52:54.069: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:54.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9397" for this suite. 
• [SLOW TEST:21.089 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":245,"skipped":3973,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:54.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0605 00:52:55.930671 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 5 00:52:55.930: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:52:55.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7252" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":246,"skipped":3975,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:52:55.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:52:56.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a" in namespace "projected-1195" to be "Succeeded or Failed" Jun 5 00:52:56.073: INFO: Pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.941695ms Jun 5 00:52:58.077: INFO: Pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041212616s Jun 5 00:53:00.082: INFO: Pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a": Phase="Running", Reason="", readiness=true. Elapsed: 4.045402499s Jun 5 00:53:02.199: INFO: Pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.162606974s STEP: Saw pod success Jun 5 00:53:02.199: INFO: Pod "downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a" satisfied condition "Succeeded or Failed" Jun 5 00:53:02.201: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a container client-container: STEP: delete the pod Jun 5 00:53:02.277: INFO: Waiting for pod downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a to disappear Jun 5 00:53:02.354: INFO: Pod downwardapi-volume-6824d509-ca1b-43e2-ab8f-b6f5b11b030a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:53:02.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1195" for this suite. • [SLOW TEST:6.426 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":3983,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:53:02.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in 
namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:53:02.768: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"53976950-c3fc-4de9-ac90-5cfba584d632", Controller:(*bool)(0xc004f315b2), BlockOwnerDeletion:(*bool)(0xc004f315b3)}} Jun 5 00:53:02.835: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cbf0f5b7-ca79-4907-9d4c-714785e1b990", Controller:(*bool)(0xc004e27a3a), BlockOwnerDeletion:(*bool)(0xc004e27a3b)}} Jun 5 00:53:02.857: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ab7f9b50-6b56-499a-9989-c09248fc2df1", Controller:(*bool)(0xc004df3a3a), BlockOwnerDeletion:(*bool)(0xc004df3a3b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:53:07.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8609" for this suite. 
• [SLOW TEST:5.551 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":248,"skipped":3988,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:53:07.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:53:14.048: INFO: DNS probes using dns-test-f931252d-dbc5-435b-a04a-5896ea765cf6 succeeded STEP: deleting the 
pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:53:22.179: INFO: File wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:22.182: INFO: File jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains '' instead of 'bar.example.com.' Jun 5 00:53:22.182: INFO: Lookups using dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b failed for: [wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local] Jun 5 00:53:27.194: INFO: File wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:27.197: INFO: File jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 5 00:53:27.197: INFO: Lookups using dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b failed for: [wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local] Jun 5 00:53:32.974: INFO: File wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:33.022: INFO: File jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:33.022: INFO: Lookups using dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b failed for: [wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local] Jun 5 00:53:37.242: INFO: File wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:37.246: INFO: File jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:37.246: INFO: Lookups using dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b failed for: [wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local] Jun 5 00:53:42.187: INFO: File wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 5 00:53:42.191: INFO: File jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local from pod dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 5 00:53:42.191: INFO: Lookups using dns-7844/dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b failed for: [wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local] Jun 5 00:53:47.205: INFO: DNS probes using dns-test-bbce7f47-392d-4278-80a3-edeb7944e23b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7844.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7844.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:53:57.866: INFO: DNS probes using dns-test-06958f5e-493f-40c4-bdfe-692aaad2a340 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:53:58.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7844" for this suite. 
• [SLOW TEST:51.013 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":249,"skipped":4003,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:53:58.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:54:03.163: INFO: Waiting up to 5m0s for pod "client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293" in namespace "pods-8061" to be "Succeeded or Failed" Jun 5 00:54:03.186: INFO: Pod "client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293": Phase="Pending", Reason="", readiness=false. Elapsed: 22.630102ms Jun 5 00:54:05.253: INFO: Pod "client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089853669s Jun 5 00:54:07.257: INFO: Pod "client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.09354171s STEP: Saw pod success Jun 5 00:54:07.257: INFO: Pod "client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293" satisfied condition "Succeeded or Failed" Jun 5 00:54:07.260: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293 container env3cont: STEP: delete the pod Jun 5 00:54:07.301: INFO: Waiting for pod client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293 to disappear Jun 5 00:54:07.307: INFO: Pod client-envvars-0cd5d3d4-4da7-4122-bae5-f56802ef7293 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:54:07.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8061" for this suite. • [SLOW TEST:8.385 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:54:07.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 00:54:07.457: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 5 00:54:12.460: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 5 00:54:12.460: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 5 00:54:14.465: INFO: Creating deployment "test-rollover-deployment" Jun 5 00:54:14.480: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 5 00:54:16.486: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 5 00:54:16.491: INFO: Ensure that both replica sets have 1 created replica Jun 5 00:54:16.496: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 5 00:54:16.502: INFO: Updating deployment test-rollover-deployment Jun 5 00:54:16.502: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 5 00:54:18.508: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 5 00:54:18.514: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 5 00:54:18.535: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:18.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915256, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:20.544: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:20.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:22.545: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:22.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:24.571: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:24.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:26.543: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:26.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:28.543: INFO: all replica sets need to contain the pod-template-hash label Jun 5 00:54:28.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915254, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:54:30.700: INFO: Jun 5 00:54:30.700: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 5 00:54:30.711: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:{test-rollover-deployment deployment-7095 /apis/apps/v1/namespaces/deployment-7095/deployments/test-rollover-deployment 903c0b9c-e104-4894-b298-de44e9cacbcd 10349394 2 2020-06-05 00:54:14 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-05 00:54:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-05 00:54:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] 
[] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d3f828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-05 00:54:14 +0000 UTC,LastTransitionTime:2020-06-05 00:54:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-06-05 00:54:30 +0000 UTC,LastTransitionTime:2020-06-05 00:54:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 5 00:54:30.714: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-7095 /apis/apps/v1/namespaces/deployment-7095/replicasets/test-rollover-deployment-7c4fd9c879 aaeb18bf-473a-4057-9bb4-643ef2524b8d 10349383 2 2020-06-05 00:54:16 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment test-rollover-deployment 903c0b9c-e104-4894-b298-de44e9cacbcd 0xc004df2d37 0xc004df2d38}] [] [{kube-controller-manager Update apps/v1 2020-06-05 00:54:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"903c0b9c-e104-4894-b298-de44e9cacbcd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 
0xc004df2dc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:54:30.714: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 5 00:54:30.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7095 /apis/apps/v1/namespaces/deployment-7095/replicasets/test-rollover-controller 675808ed-a663-4e33-9c90-f14164add5fc 10349392 2 2020-06-05 00:54:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 903c0b9c-e104-4894-b298-de44e9cacbcd 0xc004df2b1f 0xc004df2b30}] [] [{e2e.test Update apps/v1 2020-06-05 00:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-05 00:54:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"903c0b9c-e104-4894-b298-de44e9cacbcd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004df2bc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:54:30.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-7095 /apis/apps/v1/namespaces/deployment-7095/replicasets/test-rollover-deployment-5686c4cfd5 ee63ffce-4143-42d0-84e4-6bec2766d292 10349335 2 2020-06-05 00:54:14 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 903c0b9c-e104-4894-b298-de44e9cacbcd 0xc004df2c37 0xc004df2c38}] [] [{kube-controller-manager Update apps/v1 2020-06-05 00:54:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"903c0b9c-e104-4894-b298-de44e9cacbcd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004df2cc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 00:54:30.716: INFO: Pod "test-rollover-deployment-7c4fd9c879-h5ksn" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-h5ksn test-rollover-deployment-7c4fd9c879- deployment-7095 /api/v1/namespaces/deployment-7095/pods/test-rollover-deployment-7c4fd9c879-h5ksn e9b78262-81a3-4678-88ac-2fdb527b56bc 10349350 0 2020-06-05 00:54:16 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 aaeb18bf-473a-4057-9bb4-643ef2524b8d 0xc004df3377 0xc004df3378}] [] [{kube-controller-manager Update v1 2020-06-05 00:54:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aaeb18bf-473a-4057-9bb4-643ef2524b8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 00:54:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.241\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jxg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jxg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jxg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:54:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:54:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:54:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 00:54:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.241,StartTime:2020-06-05 
00:54:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 00:54:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://93c3db4ca7d025a8c8c9db6bfabbd6eb9cd306f77b31eb303101ac88255f2f51,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:54:30.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7095" for this suite. 
• [SLOW TEST:23.408 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":251,"skipped":4055,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:54:30.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4556 STEP: creating replication controller nodeport-test in namespace services-4556 I0605 00:54:30.888231 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4556, replica count: 2 I0605 00:54:33.938745 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0605 00:54:36.939015 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Jun 5 00:54:36.939: INFO: Creating new exec pod Jun 5 00:54:41.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4556 execpod7j8hv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 5 00:54:42.208: INFO: stderr: "I0605 00:54:42.132660 3777 log.go:172] (0xc000bfe000) (0xc000238500) Create stream\nI0605 00:54:42.132728 3777 log.go:172] (0xc000bfe000) (0xc000238500) Stream added, broadcasting: 1\nI0605 00:54:42.135135 3777 log.go:172] (0xc000bfe000) Reply frame received for 1\nI0605 00:54:42.135171 3777 log.go:172] (0xc000bfe000) (0xc000238b40) Create stream\nI0605 00:54:42.135184 3777 log.go:172] (0xc000bfe000) (0xc000238b40) Stream added, broadcasting: 3\nI0605 00:54:42.136186 3777 log.go:172] (0xc000bfe000) Reply frame received for 3\nI0605 00:54:42.136236 3777 log.go:172] (0xc000bfe000) (0xc0002390e0) Create stream\nI0605 00:54:42.136256 3777 log.go:172] (0xc000bfe000) (0xc0002390e0) Stream added, broadcasting: 5\nI0605 00:54:42.137348 3777 log.go:172] (0xc000bfe000) Reply frame received for 5\nI0605 00:54:42.199110 3777 log.go:172] (0xc000bfe000) Data frame received for 5\nI0605 00:54:42.199154 3777 log.go:172] (0xc0002390e0) (5) Data frame handling\nI0605 00:54:42.199177 3777 log.go:172] (0xc0002390e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0605 00:54:42.199513 3777 log.go:172] (0xc000bfe000) Data frame received for 5\nI0605 00:54:42.199543 3777 log.go:172] (0xc0002390e0) (5) Data frame handling\nI0605 00:54:42.199559 3777 log.go:172] (0xc0002390e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0605 00:54:42.199819 3777 log.go:172] (0xc000bfe000) Data frame received for 3\nI0605 00:54:42.199900 3777 log.go:172] (0xc000238b40) (3) Data frame handling\nI0605 00:54:42.200018 3777 log.go:172] (0xc000bfe000) Data frame received for 5\nI0605 00:54:42.200047 3777 log.go:172] (0xc0002390e0) (5) Data frame 
handling\nI0605 00:54:42.201994 3777 log.go:172] (0xc000bfe000) Data frame received for 1\nI0605 00:54:42.202021 3777 log.go:172] (0xc000238500) (1) Data frame handling\nI0605 00:54:42.202037 3777 log.go:172] (0xc000238500) (1) Data frame sent\nI0605 00:54:42.202063 3777 log.go:172] (0xc000bfe000) (0xc000238500) Stream removed, broadcasting: 1\nI0605 00:54:42.202095 3777 log.go:172] (0xc000bfe000) Go away received\nI0605 00:54:42.202427 3777 log.go:172] (0xc000bfe000) (0xc000238500) Stream removed, broadcasting: 1\nI0605 00:54:42.202449 3777 log.go:172] (0xc000bfe000) (0xc000238b40) Stream removed, broadcasting: 3\nI0605 00:54:42.202459 3777 log.go:172] (0xc000bfe000) (0xc0002390e0) Stream removed, broadcasting: 5\n" Jun 5 00:54:42.208: INFO: stdout: "" Jun 5 00:54:42.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4556 execpod7j8hv -- /bin/sh -x -c nc -zv -t -w 2 10.101.253.10 80' Jun 5 00:54:42.430: INFO: stderr: "I0605 00:54:42.356050 3797 log.go:172] (0xc0009c6840) (0xc00016b9a0) Create stream\nI0605 00:54:42.356121 3797 log.go:172] (0xc0009c6840) (0xc00016b9a0) Stream added, broadcasting: 1\nI0605 00:54:42.359241 3797 log.go:172] (0xc0009c6840) Reply frame received for 1\nI0605 00:54:42.359299 3797 log.go:172] (0xc0009c6840) (0xc00059ca00) Create stream\nI0605 00:54:42.359313 3797 log.go:172] (0xc0009c6840) (0xc00059ca00) Stream added, broadcasting: 3\nI0605 00:54:42.360242 3797 log.go:172] (0xc0009c6840) Reply frame received for 3\nI0605 00:54:42.360280 3797 log.go:172] (0xc0009c6840) (0xc0006405a0) Create stream\nI0605 00:54:42.360293 3797 log.go:172] (0xc0009c6840) (0xc0006405a0) Stream added, broadcasting: 5\nI0605 00:54:42.361346 3797 log.go:172] (0xc0009c6840) Reply frame received for 5\nI0605 00:54:42.424456 3797 log.go:172] (0xc0009c6840) Data frame received for 3\nI0605 00:54:42.424485 3797 log.go:172] (0xc00059ca00) (3) Data frame handling\nI0605 00:54:42.424516 
3797 log.go:172] (0xc0009c6840) Data frame received for 5\nI0605 00:54:42.424528 3797 log.go:172] (0xc0006405a0) (5) Data frame handling\nI0605 00:54:42.424538 3797 log.go:172] (0xc0006405a0) (5) Data frame sent\nI0605 00:54:42.424547 3797 log.go:172] (0xc0009c6840) Data frame received for 5\nI0605 00:54:42.424554 3797 log.go:172] (0xc0006405a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.253.10 80\nConnection to 10.101.253.10 80 port [tcp/http] succeeded!\nI0605 00:54:42.425716 3797 log.go:172] (0xc0009c6840) Data frame received for 1\nI0605 00:54:42.425731 3797 log.go:172] (0xc00016b9a0) (1) Data frame handling\nI0605 00:54:42.425743 3797 log.go:172] (0xc00016b9a0) (1) Data frame sent\nI0605 00:54:42.425756 3797 log.go:172] (0xc0009c6840) (0xc00016b9a0) Stream removed, broadcasting: 1\nI0605 00:54:42.425886 3797 log.go:172] (0xc0009c6840) Go away received\nI0605 00:54:42.426042 3797 log.go:172] (0xc0009c6840) (0xc00016b9a0) Stream removed, broadcasting: 1\nI0605 00:54:42.426058 3797 log.go:172] (0xc0009c6840) (0xc00059ca00) Stream removed, broadcasting: 3\nI0605 00:54:42.426067 3797 log.go:172] (0xc0009c6840) (0xc0006405a0) Stream removed, broadcasting: 5\n" Jun 5 00:54:42.430: INFO: stdout: "" Jun 5 00:54:42.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4556 execpod7j8hv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31712' Jun 5 00:54:42.650: INFO: stderr: "I0605 00:54:42.568912 3820 log.go:172] (0xc00003a6e0) (0xc00051cc80) Create stream\nI0605 00:54:42.568981 3820 log.go:172] (0xc00003a6e0) (0xc00051cc80) Stream added, broadcasting: 1\nI0605 00:54:42.572662 3820 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0605 00:54:42.572692 3820 log.go:172] (0xc00003a6e0) (0xc00013b680) Create stream\nI0605 00:54:42.572702 3820 log.go:172] (0xc00003a6e0) (0xc00013b680) Stream added, broadcasting: 3\nI0605 00:54:42.573680 3820 log.go:172] (0xc00003a6e0) Reply frame 
received for 3\nI0605 00:54:42.573700 3820 log.go:172] (0xc00003a6e0) (0xc00039c820) Create stream\nI0605 00:54:42.573707 3820 log.go:172] (0xc00003a6e0) (0xc00039c820) Stream added, broadcasting: 5\nI0605 00:54:42.574523 3820 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0605 00:54:42.642369 3820 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0605 00:54:42.642407 3820 log.go:172] (0xc00013b680) (3) Data frame handling\nI0605 00:54:42.642679 3820 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0605 00:54:42.642703 3820 log.go:172] (0xc00039c820) (5) Data frame handling\nI0605 00:54:42.642712 3820 log.go:172] (0xc00039c820) (5) Data frame sent\nI0605 00:54:42.642724 3820 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0605 00:54:42.642729 3820 log.go:172] (0xc00039c820) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31712\nConnection to 172.17.0.13 31712 port [tcp/31712] succeeded!\nI0605 00:54:42.644451 3820 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0605 00:54:42.644485 3820 log.go:172] (0xc00051cc80) (1) Data frame handling\nI0605 00:54:42.644508 3820 log.go:172] (0xc00051cc80) (1) Data frame sent\nI0605 00:54:42.644524 3820 log.go:172] (0xc00003a6e0) (0xc00051cc80) Stream removed, broadcasting: 1\nI0605 00:54:42.644558 3820 log.go:172] (0xc00003a6e0) Go away received\nI0605 00:54:42.644935 3820 log.go:172] (0xc00003a6e0) (0xc00051cc80) Stream removed, broadcasting: 1\nI0605 00:54:42.644959 3820 log.go:172] (0xc00003a6e0) (0xc00013b680) Stream removed, broadcasting: 3\nI0605 00:54:42.644976 3820 log.go:172] (0xc00003a6e0) (0xc00039c820) Stream removed, broadcasting: 5\n" Jun 5 00:54:42.650: INFO: stdout: "" Jun 5 00:54:42.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4556 execpod7j8hv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31712' Jun 5 00:54:42.866: INFO: stderr: "I0605 00:54:42.779891 3843 log.go:172] (0xc0009ed340) 
(0xc000b5a320) Create stream\nI0605 00:54:42.779934 3843 log.go:172] (0xc0009ed340) (0xc000b5a320) Stream added, broadcasting: 1\nI0605 00:54:42.784630 3843 log.go:172] (0xc0009ed340) Reply frame received for 1\nI0605 00:54:42.784677 3843 log.go:172] (0xc0009ed340) (0xc0006485a0) Create stream\nI0605 00:54:42.784694 3843 log.go:172] (0xc0009ed340) (0xc0006485a0) Stream added, broadcasting: 3\nI0605 00:54:42.785956 3843 log.go:172] (0xc0009ed340) Reply frame received for 3\nI0605 00:54:42.786010 3843 log.go:172] (0xc0009ed340) (0xc0005605a0) Create stream\nI0605 00:54:42.786022 3843 log.go:172] (0xc0009ed340) (0xc0005605a0) Stream added, broadcasting: 5\nI0605 00:54:42.786929 3843 log.go:172] (0xc0009ed340) Reply frame received for 5\nI0605 00:54:42.858125 3843 log.go:172] (0xc0009ed340) Data frame received for 3\nI0605 00:54:42.858150 3843 log.go:172] (0xc0006485a0) (3) Data frame handling\nI0605 00:54:42.858209 3843 log.go:172] (0xc0009ed340) Data frame received for 5\nI0605 00:54:42.858244 3843 log.go:172] (0xc0005605a0) (5) Data frame handling\nI0605 00:54:42.858312 3843 log.go:172] (0xc0005605a0) (5) Data frame sent\nI0605 00:54:42.858328 3843 log.go:172] (0xc0009ed340) Data frame received for 5\nI0605 00:54:42.858339 3843 log.go:172] (0xc0005605a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31712\nConnection to 172.17.0.12 31712 port [tcp/31712] succeeded!\nI0605 00:54:42.859914 3843 log.go:172] (0xc0009ed340) Data frame received for 1\nI0605 00:54:42.859946 3843 log.go:172] (0xc000b5a320) (1) Data frame handling\nI0605 00:54:42.859977 3843 log.go:172] (0xc000b5a320) (1) Data frame sent\nI0605 00:54:42.860012 3843 log.go:172] (0xc0009ed340) (0xc000b5a320) Stream removed, broadcasting: 1\nI0605 00:54:42.860040 3843 log.go:172] (0xc0009ed340) Go away received\nI0605 00:54:42.860490 3843 log.go:172] (0xc0009ed340) (0xc000b5a320) Stream removed, broadcasting: 1\nI0605 00:54:42.860526 3843 log.go:172] (0xc0009ed340) (0xc0006485a0) Stream removed, 
broadcasting: 3\nI0605 00:54:42.860544 3843 log.go:172] (0xc0009ed340) (0xc0005605a0) Stream removed, broadcasting: 5\n" Jun 5 00:54:42.866: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:54:42.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4556" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.152 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":252,"skipped":4059,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:54:42.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-8j2g STEP: Creating a pod to test atomic-volume-subpath Jun 5 00:54:42.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8j2g" in namespace "subpath-9423" to be "Succeeded or Failed" Jun 5 00:54:42.975: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Pending", Reason="", readiness=false. Elapsed: 11.587789ms Jun 5 00:54:45.014: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051263938s Jun 5 00:54:47.019: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 4.055667797s Jun 5 00:54:49.022: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 6.058800435s Jun 5 00:54:51.025: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 8.062311329s Jun 5 00:54:53.030: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 10.067153174s Jun 5 00:54:55.035: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 12.071585875s Jun 5 00:54:57.039: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 14.076209158s Jun 5 00:54:59.043: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 16.080343595s Jun 5 00:55:01.048: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 18.084649605s Jun 5 00:55:03.057: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. Elapsed: 20.094303085s Jun 5 00:55:05.062: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.09902492s Jun 5 00:55:07.066: INFO: Pod "pod-subpath-test-projected-8j2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.103175326s STEP: Saw pod success Jun 5 00:55:07.066: INFO: Pod "pod-subpath-test-projected-8j2g" satisfied condition "Succeeded or Failed" Jun 5 00:55:07.070: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-8j2g container test-container-subpath-projected-8j2g: STEP: delete the pod Jun 5 00:55:07.095: INFO: Waiting for pod pod-subpath-test-projected-8j2g to disappear Jun 5 00:55:07.121: INFO: Pod pod-subpath-test-projected-8j2g no longer exists STEP: Deleting pod pod-subpath-test-projected-8j2g Jun 5 00:55:07.121: INFO: Deleting pod "pod-subpath-test-projected-8j2g" in namespace "subpath-9423" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:55:07.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9423" for this suite. 
• [SLOW TEST:24.262 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":253,"skipped":4073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:55:07.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:55:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2172" for this suite. STEP: Destroying namespace "nspatchtest-31fe5a9b-b9bf-451d-9dea-8630d606d45c-5803" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":254,"skipped":4113,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:55:07.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:55:07.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711" in namespace "projected-3859" to be "Succeeded or Failed" Jun 5 00:55:07.345: INFO: Pod "downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711": Phase="Pending", Reason="", readiness=false. Elapsed: 3.312846ms Jun 5 00:55:09.348: INFO: Pod "downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007041278s Jun 5 00:55:11.353: INFO: Pod "downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011771418s STEP: Saw pod success Jun 5 00:55:11.353: INFO: Pod "downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711" satisfied condition "Succeeded or Failed" Jun 5 00:55:11.356: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711 container client-container: STEP: delete the pod Jun 5 00:55:11.408: INFO: Waiting for pod downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711 to disappear Jun 5 00:55:11.410: INFO: Pod downwardapi-volume-b8d100f3-0cce-410e-8840-90f319f59711 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:55:11.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3859" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4113,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:55:11.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Jun 5 00:55:11.540: INFO: Waiting up to 5m0s for pod 
"client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb" in namespace "containers-8602" to be "Succeeded or Failed" Jun 5 00:55:11.542: INFO: Pod "client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.450223ms Jun 5 00:55:13.547: INFO: Pod "client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006644984s Jun 5 00:55:15.551: INFO: Pod "client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011083458s STEP: Saw pod success Jun 5 00:55:15.551: INFO: Pod "client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb" satisfied condition "Succeeded or Failed" Jun 5 00:55:15.554: INFO: Trying to get logs from node latest-worker2 pod client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb container test-container: STEP: delete the pod Jun 5 00:55:15.590: INFO: Waiting for pod client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb to disappear Jun 5 00:55:15.612: INFO: Pod client-containers-b87a163a-ddb9-4355-8c83-691fad1777cb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:55:15.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8602" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4121,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:55:15.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 5 00:55:19.774: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:55:19.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6863" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4137,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:55:19.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5334 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 5 00:55:19.931: INFO: Found 0 stateful pods, waiting for 3 Jun 5 00:55:29.936: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:55:29.936: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:55:29.936: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 5 00:55:39.939: INFO: Waiting for pod 
ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:55:39.939: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:55:39.939: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 5 00:55:40.031: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 5 00:55:50.082: INFO: Updating stateful set ss2 Jun 5 00:55:50.106: INFO: Waiting for Pod statefulset-5334/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jun 5 00:56:00.726: INFO: Found 2 stateful pods, waiting for 3 Jun 5 00:56:10.732: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:56:10.732: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:56:10.732: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 5 00:56:10.757: INFO: Updating stateful set ss2 Jun 5 00:56:10.841: INFO: Waiting for Pod statefulset-5334/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 5 00:56:20.867: INFO: Updating stateful set ss2 Jun 5 00:56:20.931: INFO: Waiting for StatefulSet statefulset-5334/ss2 to complete update Jun 5 00:56:20.932: INFO: Waiting for Pod statefulset-5334/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 5 00:56:30.940: INFO: Deleting all statefulset in ns statefulset-5334 
Jun 5 00:56:30.943: INFO: Scaling statefulset ss2 to 0 Jun 5 00:57:00.978: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:57:00.982: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:57:01.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5334" for this suite. • [SLOW TEST:101.181 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":258,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:57:01.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4e5c3f63-600a-45b0-9af3-f69f23cac35c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4e5c3f63-600a-45b0-9af3-f69f23cac35c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:57:09.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2930" for this suite. • [SLOW TEST:8.140 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4230,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:57:09.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should 
set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:57:09.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a" in namespace "downward-api-9911" to be "Succeeded or Failed" Jun 5 00:57:09.268: INFO: Pod "downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.59378ms Jun 5 00:57:11.271: INFO: Pod "downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022969793s Jun 5 00:57:13.275: INFO: Pod "downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027287387s STEP: Saw pod success Jun 5 00:57:13.275: INFO: Pod "downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a" satisfied condition "Succeeded or Failed" Jun 5 00:57:13.279: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a container client-container: STEP: delete the pod Jun 5 00:57:13.373: INFO: Waiting for pod downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a to disappear Jun 5 00:57:13.378: INFO: Pod downwardapi-volume-adf042ae-33c9-4e39-bdf0-916d5413408a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:57:13.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9911" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4242,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:57:13.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-bdg7 STEP: Creating a pod to test atomic-volume-subpath Jun 5 00:57:13.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bdg7" in namespace "subpath-4916" to be "Succeeded or Failed" Jun 5 00:57:13.510: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985238ms Jun 5 00:57:15.515: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008738107s Jun 5 00:57:17.523: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 4.016956074s Jun 5 00:57:19.527: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 6.021086031s Jun 5 00:57:21.532: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.025465859s Jun 5 00:57:23.536: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 10.029993356s Jun 5 00:57:25.541: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 12.034931749s Jun 5 00:57:28.728: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 15.221850807s Jun 5 00:57:30.982: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 17.475561318s Jun 5 00:57:32.985: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 19.479254948s Jun 5 00:57:34.989: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 21.483134083s Jun 5 00:57:36.993: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Running", Reason="", readiness=true. Elapsed: 23.487187606s Jun 5 00:57:38.997: INFO: Pod "pod-subpath-test-secret-bdg7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.491317732s STEP: Saw pod success Jun 5 00:57:38.997: INFO: Pod "pod-subpath-test-secret-bdg7" satisfied condition "Succeeded or Failed" Jun 5 00:57:39.001: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-bdg7 container test-container-subpath-secret-bdg7: STEP: delete the pod Jun 5 00:57:39.064: INFO: Waiting for pod pod-subpath-test-secret-bdg7 to disappear Jun 5 00:57:39.086: INFO: Pod pod-subpath-test-secret-bdg7 no longer exists STEP: Deleting pod pod-subpath-test-secret-bdg7 Jun 5 00:57:39.086: INFO: Deleting pod "pod-subpath-test-secret-bdg7" in namespace "subpath-4916" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:57:39.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4916" for this suite. 
• [SLOW TEST:25.677 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":261,"skipped":4250,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:57:39.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:57:45.356: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.360: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.364: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.367: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.375: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.379: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod 
dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.386: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.392: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:45.398: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:57:50.403: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.407: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.411: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod 
dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.414: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.425: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.428: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.430: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.433: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:50.439: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:57:55.403: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.406: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.408: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.411: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.419: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.421: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.424: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod 
dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.427: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:57:55.434: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:58:00.403: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.407: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.411: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.415: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod 
dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.425: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.428: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.431: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.435: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:00.442: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:58:05.402: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local 
from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.406: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.408: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.412: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.421: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.424: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.427: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.430: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the 
server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:05.436: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:58:10.404: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.408: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.412: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.416: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.427: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod 
dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.430: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.437: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local from pod dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd: the server could not find the requested resource (get pods dns-test-d3793918-3412-40a0-81ac-65e576137fdd) Jun 5 00:58:10.443: INFO: Lookups using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8052.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8052.svc.cluster.local jessie_udp@dns-test-service-2.dns-8052.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8052.svc.cluster.local] Jun 5 00:58:15.439: INFO: DNS probes using dns-8052/dns-test-d3793918-3412-40a0-81ac-65e576137fdd succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:58:16.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-8052" for this suite. • [SLOW TEST:37.220 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":262,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:58:16.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 00:58:17.206: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 00:58:19.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 5 00:58:21.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915497, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 00:58:24.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration 
API STEP: Creating a custom resource definition that should be denied by the webhook Jun 5 00:58:24.334: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:58:24.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8306" for this suite. STEP: Destroying namespace "webhook-8306-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.229 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":263,"skipped":4313,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:58:24.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-896 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-896 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-896 Jun 5 00:58:24.709: INFO: Found 0 stateful pods, waiting for 1 Jun 5 00:58:34.728: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 5 00:58:34.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:58:34.978: INFO: stderr: "I0605 00:58:34.861796 3863 log.go:172] (0xc000c01080) (0xc000ac4280) Create stream\nI0605 00:58:34.861837 3863 log.go:172] (0xc000c01080) (0xc000ac4280) Stream added, broadcasting: 1\nI0605 00:58:34.867468 3863 log.go:172] (0xc000c01080) Reply frame received for 1\nI0605 00:58:34.867516 3863 log.go:172] (0xc000c01080) (0xc00058cdc0) Create stream\nI0605 00:58:34.867531 3863 log.go:172] (0xc000c01080) (0xc00058cdc0) Stream added, broadcasting: 3\nI0605 00:58:34.868447 3863 log.go:172] (0xc000c01080) Reply frame received for 3\nI0605 00:58:34.868476 3863 log.go:172] (0xc000c01080) (0xc000766d20) Create stream\nI0605 00:58:34.868485 3863 log.go:172] (0xc000c01080) (0xc000766d20) Stream added, broadcasting: 5\nI0605 00:58:34.869459 3863 log.go:172] (0xc000c01080) Reply frame received for 5\nI0605 
00:58:34.936165 3863 log.go:172] (0xc000c01080) Data frame received for 5\nI0605 00:58:34.936197 3863 log.go:172] (0xc000766d20) (5) Data frame handling\nI0605 00:58:34.936218 3863 log.go:172] (0xc000766d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:58:34.970084 3863 log.go:172] (0xc000c01080) Data frame received for 3\nI0605 00:58:34.970143 3863 log.go:172] (0xc00058cdc0) (3) Data frame handling\nI0605 00:58:34.970183 3863 log.go:172] (0xc00058cdc0) (3) Data frame sent\nI0605 00:58:34.970211 3863 log.go:172] (0xc000c01080) Data frame received for 3\nI0605 00:58:34.970235 3863 log.go:172] (0xc00058cdc0) (3) Data frame handling\nI0605 00:58:34.970385 3863 log.go:172] (0xc000c01080) Data frame received for 5\nI0605 00:58:34.970437 3863 log.go:172] (0xc000766d20) (5) Data frame handling\nI0605 00:58:34.972147 3863 log.go:172] (0xc000c01080) Data frame received for 1\nI0605 00:58:34.972193 3863 log.go:172] (0xc000ac4280) (1) Data frame handling\nI0605 00:58:34.972246 3863 log.go:172] (0xc000ac4280) (1) Data frame sent\nI0605 00:58:34.972296 3863 log.go:172] (0xc000c01080) (0xc000ac4280) Stream removed, broadcasting: 1\nI0605 00:58:34.972346 3863 log.go:172] (0xc000c01080) Go away received\nI0605 00:58:34.972649 3863 log.go:172] (0xc000c01080) (0xc000ac4280) Stream removed, broadcasting: 1\nI0605 00:58:34.972662 3863 log.go:172] (0xc000c01080) (0xc00058cdc0) Stream removed, broadcasting: 3\nI0605 00:58:34.972668 3863 log.go:172] (0xc000c01080) (0xc000766d20) Stream removed, broadcasting: 5\n" Jun 5 00:58:34.978: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:58:34.978: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:58:34.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 5 00:58:44.988: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=false Jun 5 00:58:44.988: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:58:45.004: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:58:45.004: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:58:45.004: INFO: Jun 5 00:58:45.004: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 5 00:58:46.010: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994374669s Jun 5 00:58:47.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98862746s Jun 5 00:58:48.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977936999s Jun 5 00:58:49.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.869076978s Jun 5 00:58:50.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.864267569s Jun 5 00:58:51.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.859867845s Jun 5 00:58:52.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.855644056s Jun 5 00:58:53.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.849745267s Jun 5 00:58:54.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 832.791015ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-896 Jun 5 00:58:55.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jun 5 00:58:55.412: INFO: stderr: "I0605 00:58:55.325501 3883 log.go:172] (0xc000b4cd10) (0xc0007819a0) Create stream\nI0605 00:58:55.325567 3883 log.go:172] (0xc000b4cd10) (0xc0007819a0) Stream added, broadcasting: 1\nI0605 00:58:55.328144 3883 log.go:172] (0xc000b4cd10) Reply frame received for 1\nI0605 00:58:55.328188 3883 log.go:172] (0xc000b4cd10) (0xc0006f8960) Create stream\nI0605 00:58:55.328198 3883 log.go:172] (0xc000b4cd10) (0xc0006f8960) Stream added, broadcasting: 3\nI0605 00:58:55.328988 3883 log.go:172] (0xc000b4cd10) Reply frame received for 3\nI0605 00:58:55.329016 3883 log.go:172] (0xc000b4cd10) (0xc0008e9040) Create stream\nI0605 00:58:55.329023 3883 log.go:172] (0xc000b4cd10) (0xc0008e9040) Stream added, broadcasting: 5\nI0605 00:58:55.330317 3883 log.go:172] (0xc000b4cd10) Reply frame received for 5\nI0605 00:58:55.398925 3883 log.go:172] (0xc000b4cd10) Data frame received for 5\nI0605 00:58:55.398951 3883 log.go:172] (0xc0008e9040) (5) Data frame handling\nI0605 00:58:55.398968 3883 log.go:172] (0xc0008e9040) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0605 00:58:55.401511 3883 log.go:172] (0xc000b4cd10) Data frame received for 3\nI0605 00:58:55.401535 3883 log.go:172] (0xc0006f8960) (3) Data frame handling\nI0605 00:58:55.401552 3883 log.go:172] (0xc0006f8960) (3) Data frame sent\nI0605 00:58:55.402058 3883 log.go:172] (0xc000b4cd10) Data frame received for 5\nI0605 00:58:55.402077 3883 log.go:172] (0xc0008e9040) (5) Data frame handling\nI0605 00:58:55.402096 3883 log.go:172] (0xc000b4cd10) Data frame received for 3\nI0605 00:58:55.402104 3883 log.go:172] (0xc0006f8960) (3) Data frame handling\nI0605 00:58:55.403552 3883 log.go:172] (0xc000b4cd10) Data frame received for 1\nI0605 00:58:55.403575 3883 log.go:172] (0xc0007819a0) (1) Data frame handling\nI0605 00:58:55.403592 3883 log.go:172] (0xc0007819a0) (1) Data frame sent\nI0605 00:58:55.403611 3883 log.go:172] (0xc000b4cd10) 
(0xc0007819a0) Stream removed, broadcasting: 1\nI0605 00:58:55.403663 3883 log.go:172] (0xc000b4cd10) Go away received\nI0605 00:58:55.404019 3883 log.go:172] (0xc000b4cd10) (0xc0007819a0) Stream removed, broadcasting: 1\nI0605 00:58:55.404037 3883 log.go:172] (0xc000b4cd10) (0xc0006f8960) Stream removed, broadcasting: 3\nI0605 00:58:55.404047 3883 log.go:172] (0xc000b4cd10) (0xc0008e9040) Stream removed, broadcasting: 5\n" Jun 5 00:58:55.412: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:58:55.412: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:58:55.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:58:55.646: INFO: stderr: "I0605 00:58:55.559684 3903 log.go:172] (0xc0006e4370) (0xc00057d860) Create stream\nI0605 00:58:55.559747 3903 log.go:172] (0xc0006e4370) (0xc00057d860) Stream added, broadcasting: 1\nI0605 00:58:55.562132 3903 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0605 00:58:55.562167 3903 log.go:172] (0xc0006e4370) (0xc000516c80) Create stream\nI0605 00:58:55.562184 3903 log.go:172] (0xc0006e4370) (0xc000516c80) Stream added, broadcasting: 3\nI0605 00:58:55.563285 3903 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0605 00:58:55.563334 3903 log.go:172] (0xc0006e4370) (0xc00030aa00) Create stream\nI0605 00:58:55.563349 3903 log.go:172] (0xc0006e4370) (0xc00030aa00) Stream added, broadcasting: 5\nI0605 00:58:55.564400 3903 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0605 00:58:55.639402 3903 log.go:172] (0xc0006e4370) Data frame received for 3\nI0605 00:58:55.639438 3903 log.go:172] (0xc000516c80) (3) Data frame handling\nI0605 00:58:55.639450 3903 log.go:172] (0xc000516c80) (3) Data frame 
sent\nI0605 00:58:55.639457 3903 log.go:172] (0xc0006e4370) Data frame received for 3\nI0605 00:58:55.639464 3903 log.go:172] (0xc000516c80) (3) Data frame handling\nI0605 00:58:55.639532 3903 log.go:172] (0xc0006e4370) Data frame received for 5\nI0605 00:58:55.639561 3903 log.go:172] (0xc00030aa00) (5) Data frame handling\nI0605 00:58:55.639578 3903 log.go:172] (0xc00030aa00) (5) Data frame sent\nI0605 00:58:55.639594 3903 log.go:172] (0xc0006e4370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0605 00:58:55.639602 3903 log.go:172] (0xc00030aa00) (5) Data frame handling\nI0605 00:58:55.641026 3903 log.go:172] (0xc0006e4370) Data frame received for 1\nI0605 00:58:55.641051 3903 log.go:172] (0xc00057d860) (1) Data frame handling\nI0605 00:58:55.641065 3903 log.go:172] (0xc00057d860) (1) Data frame sent\nI0605 00:58:55.641084 3903 log.go:172] (0xc0006e4370) (0xc00057d860) Stream removed, broadcasting: 1\nI0605 00:58:55.641107 3903 log.go:172] (0xc0006e4370) Go away received\nI0605 00:58:55.641675 3903 log.go:172] (0xc0006e4370) (0xc00057d860) Stream removed, broadcasting: 1\nI0605 00:58:55.641698 3903 log.go:172] (0xc0006e4370) (0xc000516c80) Stream removed, broadcasting: 3\nI0605 00:58:55.641709 3903 log.go:172] (0xc0006e4370) (0xc00030aa00) Stream removed, broadcasting: 5\n" Jun 5 00:58:55.647: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:58:55.647: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:58:55.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 5 00:58:55.876: INFO: stderr: "I0605 00:58:55.788340 3922 log.go:172] (0xc0009cb340) 
(0xc0009a0280) Create stream\nI0605 00:58:55.788382 3922 log.go:172] (0xc0009cb340) (0xc0009a0280) Stream added, broadcasting: 1\nI0605 00:58:55.792284 3922 log.go:172] (0xc0009cb340) Reply frame received for 1\nI0605 00:58:55.792392 3922 log.go:172] (0xc0009cb340) (0xc000412140) Create stream\nI0605 00:58:55.792406 3922 log.go:172] (0xc0009cb340) (0xc000412140) Stream added, broadcasting: 3\nI0605 00:58:55.793277 3922 log.go:172] (0xc0009cb340) Reply frame received for 3\nI0605 00:58:55.793308 3922 log.go:172] (0xc0009cb340) (0xc0004123c0) Create stream\nI0605 00:58:55.793322 3922 log.go:172] (0xc0009cb340) (0xc0004123c0) Stream added, broadcasting: 5\nI0605 00:58:55.794166 3922 log.go:172] (0xc0009cb340) Reply frame received for 5\nI0605 00:58:55.870032 3922 log.go:172] (0xc0009cb340) Data frame received for 5\nI0605 00:58:55.870101 3922 log.go:172] (0xc0004123c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0605 00:58:55.870133 3922 log.go:172] (0xc0009cb340) Data frame received for 3\nI0605 00:58:55.870169 3922 log.go:172] (0xc000412140) (3) Data frame handling\nI0605 00:58:55.870192 3922 log.go:172] (0xc000412140) (3) Data frame sent\nI0605 00:58:55.870306 3922 log.go:172] (0xc0009cb340) Data frame received for 3\nI0605 00:58:55.870331 3922 log.go:172] (0xc000412140) (3) Data frame handling\nI0605 00:58:55.870362 3922 log.go:172] (0xc0004123c0) (5) Data frame sent\nI0605 00:58:55.870388 3922 log.go:172] (0xc0009cb340) Data frame received for 5\nI0605 00:58:55.870405 3922 log.go:172] (0xc0004123c0) (5) Data frame handling\nI0605 00:58:55.871797 3922 log.go:172] (0xc0009cb340) Data frame received for 1\nI0605 00:58:55.871814 3922 log.go:172] (0xc0009a0280) (1) Data frame handling\nI0605 00:58:55.871828 3922 log.go:172] (0xc0009a0280) (1) Data frame sent\nI0605 00:58:55.871867 3922 log.go:172] (0xc0009cb340) (0xc0009a0280) Stream removed, broadcasting: 1\nI0605 
00:58:55.871916 3922 log.go:172] (0xc0009cb340) Go away received\nI0605 00:58:55.872188 3922 log.go:172] (0xc0009cb340) (0xc0009a0280) Stream removed, broadcasting: 1\nI0605 00:58:55.872209 3922 log.go:172] (0xc0009cb340) (0xc000412140) Stream removed, broadcasting: 3\nI0605 00:58:55.872219 3922 log.go:172] (0xc0009cb340) (0xc0004123c0) Stream removed, broadcasting: 5\n" Jun 5 00:58:55.876: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 5 00:58:55.876: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 5 00:58:55.880: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 5 00:59:05.886: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:59:05.886: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 5 00:59:05.886: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 5 00:59:05.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:59:06.130: INFO: stderr: "I0605 00:59:06.023663 3942 log.go:172] (0xc00097a000) (0xc0004d2c80) Create stream\nI0605 00:59:06.023735 3942 log.go:172] (0xc00097a000) (0xc0004d2c80) Stream added, broadcasting: 1\nI0605 00:59:06.027391 3942 log.go:172] (0xc00097a000) Reply frame received for 1\nI0605 00:59:06.027462 3942 log.go:172] (0xc00097a000) (0xc0004d3400) Create stream\nI0605 00:59:06.027491 3942 log.go:172] (0xc00097a000) (0xc0004d3400) Stream added, broadcasting: 3\nI0605 00:59:06.028524 3942 log.go:172] (0xc00097a000) Reply frame received for 3\nI0605 00:59:06.028559 3942 log.go:172] (0xc00097a000) 
(0xc0006ad0e0) Create stream\nI0605 00:59:06.028571 3942 log.go:172] (0xc00097a000) (0xc0006ad0e0) Stream added, broadcasting: 5\nI0605 00:59:06.029620 3942 log.go:172] (0xc00097a000) Reply frame received for 5\nI0605 00:59:06.116986 3942 log.go:172] (0xc00097a000) Data frame received for 5\nI0605 00:59:06.117017 3942 log.go:172] (0xc0006ad0e0) (5) Data frame handling\nI0605 00:59:06.117041 3942 log.go:172] (0xc0006ad0e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:59:06.119693 3942 log.go:172] (0xc00097a000) Data frame received for 3\nI0605 00:59:06.119744 3942 log.go:172] (0xc0004d3400) (3) Data frame handling\nI0605 00:59:06.119886 3942 log.go:172] (0xc0004d3400) (3) Data frame sent\nI0605 00:59:06.119928 3942 log.go:172] (0xc00097a000) Data frame received for 3\nI0605 00:59:06.119953 3942 log.go:172] (0xc0004d3400) (3) Data frame handling\nI0605 00:59:06.120098 3942 log.go:172] (0xc00097a000) Data frame received for 5\nI0605 00:59:06.120122 3942 log.go:172] (0xc0006ad0e0) (5) Data frame handling\nI0605 00:59:06.122544 3942 log.go:172] (0xc00097a000) Data frame received for 1\nI0605 00:59:06.122579 3942 log.go:172] (0xc0004d2c80) (1) Data frame handling\nI0605 00:59:06.122606 3942 log.go:172] (0xc0004d2c80) (1) Data frame sent\nI0605 00:59:06.122641 3942 log.go:172] (0xc00097a000) (0xc0004d2c80) Stream removed, broadcasting: 1\nI0605 00:59:06.122826 3942 log.go:172] (0xc00097a000) Go away received\nI0605 00:59:06.123198 3942 log.go:172] (0xc00097a000) (0xc0004d2c80) Stream removed, broadcasting: 1\nI0605 00:59:06.123229 3942 log.go:172] (0xc00097a000) (0xc0004d3400) Stream removed, broadcasting: 3\nI0605 00:59:06.123243 3942 log.go:172] (0xc00097a000) (0xc0006ad0e0) Stream removed, broadcasting: 5\n" Jun 5 00:59:06.130: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:59:06.130: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:59:06.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:59:06.365: INFO: stderr: "I0605 00:59:06.266439 3962 log.go:172] (0xc000ad7340) (0xc000aea280) Create stream\nI0605 00:59:06.266492 3962 log.go:172] (0xc000ad7340) (0xc000aea280) Stream added, broadcasting: 1\nI0605 00:59:06.272243 3962 log.go:172] (0xc000ad7340) Reply frame received for 1\nI0605 00:59:06.272292 3962 log.go:172] (0xc000ad7340) (0xc00058c1e0) Create stream\nI0605 00:59:06.272303 3962 log.go:172] (0xc000ad7340) (0xc00058c1e0) Stream added, broadcasting: 3\nI0605 00:59:06.273326 3962 log.go:172] (0xc000ad7340) Reply frame received for 3\nI0605 00:59:06.273365 3962 log.go:172] (0xc000ad7340) (0xc00058c780) Create stream\nI0605 00:59:06.273372 3962 log.go:172] (0xc000ad7340) (0xc00058c780) Stream added, broadcasting: 5\nI0605 00:59:06.274097 3962 log.go:172] (0xc000ad7340) Reply frame received for 5\nI0605 00:59:06.328719 3962 log.go:172] (0xc000ad7340) Data frame received for 5\nI0605 00:59:06.328752 3962 log.go:172] (0xc00058c780) (5) Data frame handling\nI0605 00:59:06.328772 3962 log.go:172] (0xc00058c780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:59:06.357030 3962 log.go:172] (0xc000ad7340) Data frame received for 3\nI0605 00:59:06.357054 3962 log.go:172] (0xc00058c1e0) (3) Data frame handling\nI0605 00:59:06.357063 3962 log.go:172] (0xc00058c1e0) (3) Data frame sent\nI0605 00:59:06.357673 3962 log.go:172] (0xc000ad7340) Data frame received for 5\nI0605 00:59:06.357696 3962 log.go:172] (0xc00058c780) (5) Data frame handling\nI0605 00:59:06.357747 3962 log.go:172] (0xc000ad7340) Data frame received for 3\nI0605 00:59:06.357796 3962 log.go:172] (0xc00058c1e0) (3) Data frame handling\nI0605 00:59:06.359132 
3962 log.go:172] (0xc000ad7340) Data frame received for 1\nI0605 00:59:06.359147 3962 log.go:172] (0xc000aea280) (1) Data frame handling\nI0605 00:59:06.359159 3962 log.go:172] (0xc000aea280) (1) Data frame sent\nI0605 00:59:06.359366 3962 log.go:172] (0xc000ad7340) (0xc000aea280) Stream removed, broadcasting: 1\nI0605 00:59:06.359397 3962 log.go:172] (0xc000ad7340) Go away received\nI0605 00:59:06.359797 3962 log.go:172] (0xc000ad7340) (0xc000aea280) Stream removed, broadcasting: 1\nI0605 00:59:06.359824 3962 log.go:172] (0xc000ad7340) (0xc00058c1e0) Stream removed, broadcasting: 3\nI0605 00:59:06.359838 3962 log.go:172] (0xc000ad7340) (0xc00058c780) Stream removed, broadcasting: 5\n" Jun 5 00:59:06.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:59:06.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:59:06.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-896 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 5 00:59:06.668: INFO: stderr: "I0605 00:59:06.558113 3982 log.go:172] (0xc000adc000) (0xc000442000) Create stream\nI0605 00:59:06.558183 3982 log.go:172] (0xc000adc000) (0xc000442000) Stream added, broadcasting: 1\nI0605 00:59:06.562340 3982 log.go:172] (0xc000adc000) Reply frame received for 1\nI0605 00:59:06.562444 3982 log.go:172] (0xc000adc000) (0xc000307040) Create stream\nI0605 00:59:06.562473 3982 log.go:172] (0xc000adc000) (0xc000307040) Stream added, broadcasting: 3\nI0605 00:59:06.563671 3982 log.go:172] (0xc000adc000) Reply frame received for 3\nI0605 00:59:06.563711 3982 log.go:172] (0xc000adc000) (0xc000307360) Create stream\nI0605 00:59:06.563725 3982 log.go:172] (0xc000adc000) (0xc000307360) Stream added, broadcasting: 5\nI0605 00:59:06.564972 3982 log.go:172] (0xc000adc000) 
Reply frame received for 5\nI0605 00:59:06.631367 3982 log.go:172] (0xc000adc000) Data frame received for 5\nI0605 00:59:06.631399 3982 log.go:172] (0xc000307360) (5) Data frame handling\nI0605 00:59:06.631421 3982 log.go:172] (0xc000307360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0605 00:59:06.659646 3982 log.go:172] (0xc000adc000) Data frame received for 3\nI0605 00:59:06.659673 3982 log.go:172] (0xc000307040) (3) Data frame handling\nI0605 00:59:06.659686 3982 log.go:172] (0xc000307040) (3) Data frame sent\nI0605 00:59:06.659751 3982 log.go:172] (0xc000adc000) Data frame received for 5\nI0605 00:59:06.659769 3982 log.go:172] (0xc000307360) (5) Data frame handling\nI0605 00:59:06.660403 3982 log.go:172] (0xc000adc000) Data frame received for 3\nI0605 00:59:06.660423 3982 log.go:172] (0xc000307040) (3) Data frame handling\nI0605 00:59:06.662406 3982 log.go:172] (0xc000adc000) Data frame received for 1\nI0605 00:59:06.662427 3982 log.go:172] (0xc000442000) (1) Data frame handling\nI0605 00:59:06.662444 3982 log.go:172] (0xc000442000) (1) Data frame sent\nI0605 00:59:06.662474 3982 log.go:172] (0xc000adc000) (0xc000442000) Stream removed, broadcasting: 1\nI0605 00:59:06.662602 3982 log.go:172] (0xc000adc000) Go away received\nI0605 00:59:06.662934 3982 log.go:172] (0xc000adc000) (0xc000442000) Stream removed, broadcasting: 1\nI0605 00:59:06.662962 3982 log.go:172] (0xc000adc000) (0xc000307040) Stream removed, broadcasting: 3\nI0605 00:59:06.662974 3982 log.go:172] (0xc000adc000) (0xc000307360) Stream removed, broadcasting: 5\n" Jun 5 00:59:06.668: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 5 00:59:06.668: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 5 00:59:06.668: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:59:06.671: INFO: Waiting for stateful set status.readyReplicas 
to become 0, currently 1 Jun 5 00:59:16.680: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:59:16.680: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:59:16.680: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 5 00:59:16.695: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:16.695: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:16.695: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:16.695: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:16.695: INFO: Jun 5 00:59:16.695: INFO: StatefulSet ss has not 
reached scale 0, at 3 Jun 5 00:59:17.830: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:17.830: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:17.831: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:17.831: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:17.831: INFO: Jun 5 00:59:17.831: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 5 00:59:18.848: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:18.848: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:18.848: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:18.848: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:18.848: INFO: Jun 5 00:59:18.848: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 5 00:59:19.852: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:19.853: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:19.853: INFO: ss-1 latest-worker Pending 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:19.853: INFO: Jun 5 00:59:19.853: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 5 00:59:20.857: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:20.857: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:20.857: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:20.857: INFO: Jun 5 00:59:20.857: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 5 00:59:21.862: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:21.862: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:21.862: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:21.862: INFO: Jun 5 00:59:21.862: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 5 00:59:22.867: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:22.867: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:22.867: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:22.867: INFO: Jun 5 00:59:22.867: INFO: StatefulSet ss has 
not reached scale 0, at 2 Jun 5 00:59:23.871: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:23.871: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:23.871: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:45 +0000 UTC }] Jun 5 00:59:23.871: INFO: Jun 5 00:59:23.871: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 5 00:59:24.874: INFO: POD NODE PHASE GRACE CONDITIONS Jun 5 00:59:24.874: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:59:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-05 00:58:24 +0000 UTC }] Jun 5 00:59:24.875: INFO: Jun 5 00:59:24.875: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 5 00:59:25.879: INFO: Verifying statefulset ss doesn't scale past 0 for another 814.259886ms STEP: Scaling down stateful set ss to 0 replicas and 
waiting until none of pods will run in namespacestatefulset-896 Jun 5 00:59:26.883: INFO: Scaling statefulset ss to 0 Jun 5 00:59:26.894: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 5 00:59:26.897: INFO: Deleting all statefulset in ns statefulset-896 Jun 5 00:59:26.900: INFO: Scaling statefulset ss to 0 Jun 5 00:59:26.908: INFO: Waiting for statefulset status.replicas updated to 0 Jun 5 00:59:26.911: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:26.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-896" for this suite. • [SLOW TEST:62.373 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":264,"skipped":4319,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:26.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jun 5 00:59:27.015: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix789967945/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:27.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6725" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":265,"skipped":4324,"failed":0} ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:27.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-aa3b9775-f0ba-4a0a-ab07-c54be689804b STEP: Creating secret with name s-test-opt-upd-c2dc2bfc-9453-4af7-b80c-4f84ce20f6ec STEP: Creating the pod STEP: Deleting secret s-test-opt-del-aa3b9775-f0ba-4a0a-ab07-c54be689804b STEP: Updating secret s-test-opt-upd-c2dc2bfc-9453-4af7-b80c-4f84ce20f6ec STEP: Creating secret with name s-test-opt-create-6b7799ab-9ad9-489b-ae05-297c772efb75 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:37.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1344" for this suite. 
• [SLOW TEST:10.320 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:37.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3337.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3337.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3337.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3337.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3337.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3337.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 5 00:59:43.606: INFO: DNS probes using dns-3337/dns-test-6ddd74f8-8b29-489b-84e8-20406fbf5f94 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:43.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3337" for this suite. 
• [SLOW TEST:6.462 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":267,"skipped":4364,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:43.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-4a233565-bad6-47f6-9252-b4587b5d4387 STEP: Creating a pod to test consume configMaps Jun 5 00:59:44.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e" in namespace "projected-6112" to be "Succeeded or Failed" Jun 5 00:59:44.477: INFO: Pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e": Phase="Pending", Reason="", readiness=false. Elapsed: 79.687456ms Jun 5 00:59:46.482: INFO: Pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.084352442s Jun 5 00:59:48.486: INFO: Pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e": Phase="Running", Reason="", readiness=true. Elapsed: 4.08861541s Jun 5 00:59:50.501: INFO: Pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103652712s STEP: Saw pod success Jun 5 00:59:50.501: INFO: Pod "pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e" satisfied condition "Succeeded or Failed" Jun 5 00:59:50.504: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e container projected-configmap-volume-test: STEP: delete the pod Jun 5 00:59:50.536: INFO: Waiting for pod pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e to disappear Jun 5 00:59:50.540: INFO: Pod pod-projected-configmaps-f666ec32-a376-4827-91b4-b974be62e56e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:50.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6112" for this suite. 
• [SLOW TEST:6.650 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4380,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:50.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-7ebd98f4-a748-4e76-8dde-58a1bc8ae78d STEP: Creating a pod to test consume configMaps Jun 5 00:59:50.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d" in namespace "configmap-4851" to be "Succeeded or Failed" Jun 5 00:59:50.662: INFO: Pod "pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561442ms Jun 5 00:59:52.727: INFO: Pod "pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0674348s Jun 5 00:59:54.731: INFO: Pod "pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071398008s STEP: Saw pod success Jun 5 00:59:54.731: INFO: Pod "pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d" satisfied condition "Succeeded or Failed" Jun 5 00:59:54.734: INFO: Trying to get logs from node latest-worker pod pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d container configmap-volume-test: STEP: delete the pod Jun 5 00:59:54.781: INFO: Waiting for pod pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d to disappear Jun 5 00:59:54.796: INFO: Pod pod-configmaps-035cdd5d-6cff-4755-91c8-73c32baabc6d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 00:59:54.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4851" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4389,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 00:59:54.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide 
node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 00:59:54.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1" in namespace "projected-2250" to be "Succeeded or Failed" Jun 5 00:59:54.910: INFO: Pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.935826ms Jun 5 00:59:56.915: INFO: Pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020653868s Jun 5 00:59:58.980: INFO: Pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.086148569s Jun 5 01:00:00.986: INFO: Pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09126295s STEP: Saw pod success Jun 5 01:00:00.986: INFO: Pod "downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1" satisfied condition "Succeeded or Failed" Jun 5 01:00:00.990: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1 container client-container: STEP: delete the pod Jun 5 01:00:01.084: INFO: Waiting for pod downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1 to disappear Jun 5 01:00:01.099: INFO: Pod downwardapi-volume-02682301-0934-47e0-a402-75450718e0f1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:00:01.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2250" for this suite. 
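The downward API test above relies on a projected volume with a `resourceFieldRef` for `limits.cpu`; when the container declares no cpu limit, the kubelet publishes the node's allocatable cpu instead. A sketch of the pod spec shape involved (field values here are illustrative assumptions, not the test's actual image or paths):

```python
# Illustrative pod spec fragment, expressed as the dict shape of the
# corresponding Kubernetes API object. The container deliberately sets no
# resources.limits.cpu, so the published cpu_limit file falls back to
# node allocatable cpu. Image and mount path are assumed, not from the log.
pod_spec = {
    "containers": [{
        "name": "client-container",
        "image": "registry.k8s.io/e2e-test-images/busybox:1.29",  # assumed
        "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
        # note: no "resources" block, which is what triggers the default
        "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
    }],
    "volumes": [{
        "name": "podinfo",
        "projected": {"sources": [{
            "downwardAPI": {"items": [{
                "path": "cpu_limit",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.cpu",
                },
            }]},
        }]},
    }],
}
```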
• [SLOW TEST:6.301 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4393,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:00:01.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 01:00:01.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe" in namespace "downward-api-8888" to be "Succeeded or Failed" Jun 5 01:00:01.299: INFO: Pod "downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.062326ms Jun 5 01:00:03.430: INFO: Pod "downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133789794s Jun 5 01:00:05.434: INFO: Pod "downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138299134s STEP: Saw pod success Jun 5 01:00:05.435: INFO: Pod "downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe" satisfied condition "Succeeded or Failed" Jun 5 01:00:05.438: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe container client-container: STEP: delete the pod Jun 5 01:00:05.474: INFO: Waiting for pod downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe to disappear Jun 5 01:00:05.487: INFO: Pod downwardapi-volume-7b430097-1278-4416-a718-178a74dd0abe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:00:05.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8888" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:00:05.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 01:00:06.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 01:00:08.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915606, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915606, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915606, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915606, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 01:00:11.352: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:00:11.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2238" for this suite. STEP: Destroying namespace "webhook-2238-markers" for this suite. 
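The mutating-webhook test above registers a webhook and then creates a ConfigMap it expects to come back modified. The essential shape of such a webhook's reply is an AdmissionReview response carrying a base64-encoded JSONPatch; a hedged sketch (the patch key/value are illustrative, not necessarily the e2e webhook's exact marker entry):

```python
import base64
import json

def mutate_configmap_review(review):
    """Build an AdmissionReview response that mutates a ConfigMap.

    The response echoes the request UID, allows the object, and attaches a
    JSONPatch (base64-encoded, patchType "JSONPatch") that adds one data key.
    The "/data/mutated" entry below is an assumed example key.
    """
    patch = [{"op": "add", "path": "/data/mutated", "value": "yes"}]
    return {
        "apiVersion": review["apiVersion"],
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```

The apiserver applies the decoded patch before persisting the object, which is why the test can verify the mutation simply by reading the ConfigMap back.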
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.079 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":272,"skipped":4434,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:00:11.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 5 01:00:11.713: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 5 01:00:11.729: INFO: Waiting for terminating namespaces to be deleted... 
Jun 5 01:00:11.731: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 5 01:00:11.736: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.736: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 5 01:00:11.736: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.737: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 5 01:00:11.737: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.737: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 01:00:11.737: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.737: INFO: Container kube-proxy ready: true, restart count 0 Jun 5 01:00:11.737: INFO: sample-webhook-deployment-75dd644756-9m7lf from webhook-2238 started at 2020-06-05 01:00:06 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.737: INFO: Container sample-webhook ready: true, restart count 0 Jun 5 01:00:11.737: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 5 01:00:11.742: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.742: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 5 01:00:11.742: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.742: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 5 01:00:11.742: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 
01:00:11.742: INFO: Container kindnet-cni ready: true, restart count 2 Jun 5 01:00:11.742: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 5 01:00:11.742: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-528d87bb-149e-40a0-9d87-77537cdfed96 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-528d87bb-149e-40a0-9d87-77537cdfed96 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-528d87bb-149e-40a0-9d87-77537cdfed96 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:05:19.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6907" for this suite. 
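The scheduling predicate exercised above (pod4 on 0.0.0.0:54322 blocking pod5 on 127.0.0.1:54322) reduces to a simple conflict rule: same port and protocol clash whenever either side binds the wildcard address or both bind the same IP. A sketch of that rule (a hypothetical helper, not the scheduler's actual code):

```python
def host_ports_conflict(req_a, req_b):
    """Return True when two (host_ip, host_port, protocol) requests clash.

    Mirrors the rule this e2e test validates: identical port and protocol
    conflict if either request binds 0.0.0.0 (which overlaps every hostIP)
    or both bind the same specific IP.
    """
    ip_a, port_a, proto_a = req_a
    ip_b, port_b, proto_b = req_b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == "0.0.0.0" or ip_b == "0.0.0.0" or ip_a == ip_b

# pod4 binds 0.0.0.0:54322/TCP; pod5 asks for 127.0.0.1:54322/TCP on the
# same node, so the predicate reports a conflict and pod5 stays Pending.
```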
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.411 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":273,"skipped":4436,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:05:19.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-dps5 STEP: Creating a pod to test atomic-volume-subpath Jun 5 01:05:20.096: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dps5" in namespace "subpath-5625" to be "Succeeded or 
Failed" Jun 5 01:05:20.106: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.794936ms Jun 5 01:05:22.253: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156993575s Jun 5 01:05:24.256: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160179091s Jun 5 01:05:26.259: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 6.163436264s Jun 5 01:05:28.263: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 8.167143915s Jun 5 01:05:30.294: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 10.198479713s Jun 5 01:05:32.298: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 12.201958317s Jun 5 01:05:34.330: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 14.234465294s Jun 5 01:05:36.335: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 16.23868268s Jun 5 01:05:38.363: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 18.266588836s Jun 5 01:05:40.374: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 20.278374104s Jun 5 01:05:42.393: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 22.297307032s Jun 5 01:05:44.398: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Running", Reason="", readiness=true. Elapsed: 24.301664559s Jun 5 01:05:46.403: INFO: Pod "pod-subpath-test-configmap-dps5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.306704714s STEP: Saw pod success Jun 5 01:05:46.403: INFO: Pod "pod-subpath-test-configmap-dps5" satisfied condition "Succeeded or Failed" Jun 5 01:05:46.406: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-dps5 container test-container-subpath-configmap-dps5: STEP: delete the pod Jun 5 01:05:46.510: INFO: Waiting for pod pod-subpath-test-configmap-dps5 to disappear Jun 5 01:05:46.514: INFO: Pod pod-subpath-test-configmap-dps5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-dps5 Jun 5 01:05:46.514: INFO: Deleting pod "pod-subpath-test-configmap-dps5" in namespace "subpath-5625" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:05:46.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5625" for this suite. • [SLOW TEST:26.559 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":274,"skipped":4444,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Jun 5 01:05:46.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:06:17.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9512" for this suite. 
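The three container names above encode the restart policies under test (rpa = Always, rpof = OnFailure, rpn = Never), and the expected RestartCount/Phase/State follow from the kubelet's restart decision. A sketch of that decision rule, assuming the standard policy semantics:

```python
def should_restart(restart_policy, exit_code):
    """Whether the kubelet restarts an exited container under a given policy.

    Always restarts on any exit, OnFailure restarts only on a non-zero
    exit code, and Never leaves the container terminated. This is a
    simplified model of the behavior the blackbox test observes.
    """
    if restart_policy == "Always":
        return True
    if restart_policy == "OnFailure":
        return exit_code != 0
    return False  # "Never"
```

Under this rule a pod whose container exits 0 with policy Never reaches phase Succeeded with RestartCount 0, while the Always variant keeps restarting, which is what drives the differing 'Phase' and 'RestartCount' expectations in the steps above.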
• [SLOW TEST:30.851 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4448,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:06:17.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 5 01:06:17.913: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 5 01:06:19.989: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915977, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915977, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915978, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726915977, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 5 01:06:23.027: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 01:06:23.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8285-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:06:24.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8491" for this suite. STEP: Destroying namespace "webhook-8491-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.003 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":276,"skipped":4450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:06:24.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-40c79f97-368a-4d72-ad6b-7a24288c588b STEP: Creating a pod to test consume configMaps Jun 5 01:06:24.505: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45" in namespace "projected-9706" to be "Succeeded or Failed" Jun 5 01:06:24.527: INFO: Pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45": 
Phase="Pending", Reason="", readiness=false. Elapsed: 22.054738ms Jun 5 01:06:26.532: INFO: Pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026703149s Jun 5 01:06:28.536: INFO: Pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45": Phase="Running", Reason="", readiness=true. Elapsed: 4.030893341s Jun 5 01:06:30.541: INFO: Pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035934208s STEP: Saw pod success Jun 5 01:06:30.541: INFO: Pod "pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45" satisfied condition "Succeeded or Failed" Jun 5 01:06:30.544: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45 container projected-configmap-volume-test: STEP: delete the pod Jun 5 01:06:30.631: INFO: Waiting for pod pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45 to disappear Jun 5 01:06:30.642: INFO: Pod pod-projected-configmaps-733ba4b0-bc79-4c11-8c86-c68312efbd45 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:06:30.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9706" for this suite. 
• [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:06:30.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8a54b46a-6b00-4477-873e-d0c209aec4d9 STEP: Creating a pod to test consume configMaps Jun 5 01:06:30.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e" in namespace "configmap-1876" to be "Succeeded or Failed" Jun 5 01:06:30.768: INFO: Pod "pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.730055ms Jun 5 01:06:32.772: INFO: Pod "pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050811107s Jun 5 01:06:34.776: INFO: Pod "pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054525346s STEP: Saw pod success Jun 5 01:06:34.776: INFO: Pod "pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e" satisfied condition "Succeeded or Failed" Jun 5 01:06:34.779: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e container configmap-volume-test: STEP: delete the pod Jun 5 01:06:34.862: INFO: Waiting for pod pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e to disappear Jun 5 01:06:34.888: INFO: Pod pod-configmaps-797ee568-b254-4b64-9048-926235e4ce9e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:06:34.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1876" for this suite. 
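(Editor's note) The ConfigMap test above mounts a single key at a mapped path with an explicit file mode. A minimal sketch of the kind of manifest it exercises — all names, the image, and the command here are illustrative placeholders, not values taken from the log:

```yaml
# Illustrative only: mount one ConfigMap key under a remapped path with an
# explicit Item mode, as the "mappings and Item mode set" test does.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-config
      items:
      - key: data-1          # ConfigMap key to project
        path: path/to/data-1 # remapped file path inside the mount
        mode: 0400           # per-item file mode checked by the test
```

The test then reads the file from the pod's logs and asserts on both its content and its mode, which is why the log waits for the pod to reach "Succeeded or Failed" before fetching logs.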
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:06:34.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 01:06:35.159: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 5 01:06:35.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:35.175: INFO: Number of nodes with available pods: 0 Jun 5 01:06:35.175: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:06:36.206: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:36.210: INFO: Number of nodes with available pods: 0 Jun 5 01:06:36.210: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:06:37.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:37.183: INFO: Number of nodes with available pods: 0 Jun 5 01:06:37.183: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:06:38.182: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:38.203: INFO: Number of nodes with available pods: 0 Jun 5 01:06:38.203: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:06:39.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:39.184: INFO: Number of nodes with available pods: 1 Jun 5 01:06:39.184: INFO: Node latest-worker2 is running more than one daemon pod Jun 5 01:06:40.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:40.184: INFO: Number of nodes with available pods: 2 Jun 5 01:06:40.184: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 5 01:06:40.265: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:40.265: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:40.306: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:41.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:41.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:41.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:42.397: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:42.397: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:42.403: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:43.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 5 01:06:43.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:43.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:44.313: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:44.313: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:44.313: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:44.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:45.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:45.311: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:45.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:45.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:46.325: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:46.325: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:46.325: INFO: Wrong image for pod: daemon-set-c6bxv. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:46.330: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:47.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:47.312: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:47.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:47.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:48.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:48.311: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:48.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:48.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:49.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:49.311: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:49.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 5 01:06:49.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:50.343: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:50.343: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:50.343: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:50.347: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:51.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:51.311: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:51.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:51.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:52.315: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:52.315: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:52.315: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 5 01:06:52.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:53.311: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:53.311: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:53.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:53.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:54.312: INFO: Wrong image for pod: daemon-set-7c2tg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:54.312: INFO: Pod daemon-set-7c2tg is not available Jun 5 01:06:54.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:54.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:55.332: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:55.332: INFO: Pod daemon-set-z67bp is not available Jun 5 01:06:55.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:56.311: INFO: Wrong image for pod: daemon-set-c6bxv. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:56.311: INFO: Pod daemon-set-z67bp is not available Jun 5 01:06:56.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:57.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:57.312: INFO: Pod daemon-set-z67bp is not available Jun 5 01:06:57.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:58.310: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:58.310: INFO: Pod daemon-set-z67bp is not available Jun 5 01:06:58.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:06:59.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:06:59.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:00.311: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 5 01:07:00.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:01.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:07:01.312: INFO: Pod daemon-set-c6bxv is not available Jun 5 01:07:01.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:02.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:07:02.312: INFO: Pod daemon-set-c6bxv is not available Jun 5 01:07:02.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:03.312: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 5 01:07:03.312: INFO: Pod daemon-set-c6bxv is not available Jun 5 01:07:03.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:04.319: INFO: Wrong image for pod: daemon-set-c6bxv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 5 01:07:04.319: INFO: Pod daemon-set-c6bxv is not available Jun 5 01:07:04.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:05.311: INFO: Pod daemon-set-cqr2d is not available Jun 5 01:07:05.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 5 01:07:05.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:05.324: INFO: Number of nodes with available pods: 1 Jun 5 01:07:05.324: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:07:06.329: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:06.333: INFO: Number of nodes with available pods: 1 Jun 5 01:07:06.333: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:07:07.333: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:07.336: INFO: Number of nodes with available pods: 1 Jun 5 01:07:07.336: INFO: Node latest-worker is running more than one daemon pod Jun 5 01:07:08.328: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 5 01:07:08.332: INFO: Number of nodes with available pods: 2 Jun 5 01:07:08.332: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6573, will wait for the garbage collector to delete the pods Jun 5 01:07:08.406: INFO: Deleting DaemonSet.extensions daemon-set took: 5.793157ms Jun 5 01:07:08.806: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.211829ms Jun 5 01:07:14.919: INFO: Number of nodes with available pods: 0 Jun 5 01:07:14.919: INFO: Number of running nodes: 0, number of available pods: 0 Jun 5 01:07:14.922: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6573/daemonsets","resourceVersion":"10353158"},"items":null} Jun 5 01:07:14.940: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6573/pods","resourceVersion":"10353159"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:07:14.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6573" for this suite. 
• [SLOW TEST:39.999 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":279,"skipped":4537,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:07:14.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jun 5 01:07:15.032: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:07:31.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-3438" for this suite. • [SLOW TEST:16.066 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":280,"skipped":4554,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:07:31.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 5 01:07:31.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76" in namespace "downward-api-1290" to be "Succeeded or Failed" Jun 5 01:07:31.128: INFO: Pod "downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.004466ms Jun 5 01:07:33.132: INFO: Pod "downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007556479s Jun 5 01:07:35.137: INFO: Pod "downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011632658s STEP: Saw pod success Jun 5 01:07:35.137: INFO: Pod "downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76" satisfied condition "Succeeded or Failed" Jun 5 01:07:35.139: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76 container client-container: STEP: delete the pod Jun 5 01:07:35.170: INFO: Waiting for pod downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76 to disappear Jun 5 01:07:35.182: INFO: Pod downwardapi-volume-fd412981-da0a-4292-84d0-6795d0914e76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:07:35.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1290" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4555,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:07:35.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0d9a728b-3aca-4ea9-be43-06613a2f8d45 STEP: Creating a pod to test consume secrets Jun 5 01:07:35.345: INFO: Waiting up to 5m0s for pod "pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68" in namespace "secrets-9755" to be "Succeeded or Failed" Jun 5 01:07:35.357: INFO: Pod "pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68": Phase="Pending", Reason="", readiness=false. Elapsed: 11.916858ms Jun 5 01:07:37.397: INFO: Pod "pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051525279s Jun 5 01:07:39.402: INFO: Pod "pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056841388s STEP: Saw pod success Jun 5 01:07:39.402: INFO: Pod "pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68" satisfied condition "Succeeded or Failed" Jun 5 01:07:39.406: INFO: Trying to get logs from node latest-worker pod pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68 container secret-volume-test: STEP: delete the pod Jun 5 01:07:39.440: INFO: Waiting for pod pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68 to disappear Jun 5 01:07:39.453: INFO: Pod pod-secrets-7d97db36-a19c-488c-a38f-d7214c3f0c68 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:07:39.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9755" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4562,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:07:39.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod 
liveness-4b6e59bf-32f1-4c6d-af22-475e3377e1aa in namespace container-probe-784 Jun 5 01:07:43.572: INFO: Started pod liveness-4b6e59bf-32f1-4c6d-af22-475e3377e1aa in namespace container-probe-784 STEP: checking the pod's current state and verifying that restartCount is present Jun 5 01:07:43.575: INFO: Initial restart count of pod liveness-4b6e59bf-32f1-4c6d-af22-475e3377e1aa is 0 Jun 5 01:08:01.863: INFO: Restart count of pod container-probe-784/liveness-4b6e59bf-32f1-4c6d-af22-475e3377e1aa is now 1 (18.287171352s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 5 01:08:01.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-784" for this suite. • [SLOW TEST:22.466 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 5 01:08:01.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 5 01:08:02.032: INFO: Creating deployment "webserver-deployment" Jun 5 01:08:02.038: INFO: Waiting for observed generation 1 Jun 5 01:08:04.447: INFO: Waiting for all required pods to come up Jun 5 01:08:04.467: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 5 01:08:16.613: INFO: Waiting for deployment "webserver-deployment" to complete Jun 5 01:08:16.618: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 5 01:08:16.627: INFO: Updating deployment webserver-deployment Jun 5 01:08:16.627: INFO: Waiting for observed generation 2 Jun 5 01:08:18.668: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 5 01:08:18.672: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 5 01:08:18.675: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 5 01:08:18.682: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 5 01:08:18.682: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 5 01:08:18.684: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 5 01:08:18.688: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 5 01:08:18.688: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 5 01:08:18.693: INFO: Updating deployment webserver-deployment Jun 5 01:08:18.693: INFO: Waiting for the replicasets of deployment 
"webserver-deployment" to have desired number of replicas Jun 5 01:08:19.028: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 5 01:08:19.194: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 5 01:08:19.466: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4077 /apis/apps/v1/namespaces/deployment-4077/deployments/webserver-deployment a8a50fa0-93a5-45ba-8d91-481d07933232 10353664 3 2020-06-05 01:08:02 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-05 01:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004157738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-06-05 01:08:17 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 
UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-05 01:08:19 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 5 01:08:19.614: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-4077 /apis/apps/v1/namespaces/deployment-4077/replicasets/webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 10353718 3 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a8a50fa0-93a5-45ba-8d91-481d07933232 0xc004157cc7 0xc004157cc8}] [] [{kube-controller-manager Update apps/v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a50fa0-93a5-45ba-8d91-481d07933232\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{}
,"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004157d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 5 01:08:19.614: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 5 01:08:19.614: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-4077 /apis/apps/v1/namespaces/deployment-4077/replicasets/webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 10353719 3 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a8a50fa0-93a5-45ba-8d91-481d07933232 0xc004157dd7 0xc004157dd8}] [] [{kube-controller-manager Update apps/v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8a50fa0-93a5-45ba-8d91-481d07933232\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004157e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler 
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 5 01:08:19.720: INFO: Pod "webserver-deployment-6676bcd6d4-2nb26" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2nb26 webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-2nb26 30b95a7b-5404-48bc-b944-dc9cc6fa2d65 10353711 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0417 0xc0041f0418}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:ni
l,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Ove
rhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.720: INFO: Pod "webserver-deployment-6676bcd6d4-44jgg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-44jgg webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-44jgg 2c209f81-38d3-4e7a-83f3-03d83a84bd71 10353710 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0557 0xc0041f0558}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-4kxtd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4kxtd webserver-deployment-6676bcd6d4- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-4kxtd 4175a03b-d709-4632-893f-74bf81e62eba 10353722 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f06e7 0xc0041f06e8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-4nbxs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4nbxs webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-4nbxs 981d08cb-c69f-47ce-beb5-f30e897919a4 10353634 0 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0857 0xc0041f0858}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-05 01:08:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-6cg6l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6cg6l webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-6cg6l 834ce15c-ef10-411d-adf9-7f3f6438be4d 10353708 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0a57 0xc0041f0a58}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-7spbb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7spbb webserver-deployment-6676bcd6d4- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-7spbb 783eef1b-78f7-4378-91c7-d0d20823411c 10353691 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0bc7 0xc0041f0bc8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-88qw7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-88qw7 webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-88qw7 c47f2155-751b-46ac-80bd-9f13f2d50fdd 10353671 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0d27 0xc0041f0d28}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName
:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.721: INFO: Pod "webserver-deployment-6676bcd6d4-92s8b" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-92s8b webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-92s8b 1da1db71-7b2e-4d00-8b6c-b857ac7dfd69 10353646 0 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f0e67 0xc0041f0e68}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-05 01:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.722: INFO: Pod "webserver-deployment-6676bcd6d4-94vvb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-94vvb webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-94vvb afbff866-5f1c-4d59-a38a-cb1b12f547b1 10353625 0 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f1017 0xc0041f1018}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-05 01:08:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.722: INFO: Pod "webserver-deployment-6676bcd6d4-fnwmn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fnwmn webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-fnwmn e42033c4-8ddf-4e00-972b-7475958bde9e 10353709 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f1227 0xc0041f1228}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.722: INFO: Pod "webserver-deployment-6676bcd6d4-gjq9n" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gjq9n webserver-deployment-6676bcd6d4- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-gjq9n 591c4940-4f59-4a56-8fa9-8b43ab793376 10353622 0 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f1397 0xc0041f1398}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-05 01:08:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.722: INFO: Pod "webserver-deployment-6676bcd6d4-mhzxd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mhzxd webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-mhzxd d58a300c-5242-4994-8ee8-bc30b01e92ef 10353644 0 2020-06-05 01:08:16 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f1597 0xc0041f1598}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-05 01:08:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.722: INFO: Pod "webserver-deployment-6676bcd6d4-ss5sv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ss5sv webserver-deployment-6676bcd6d4- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-6676bcd6d4-ss5sv dc1347fc-c3c5-462e-8145-9e6677a3fcc1 10353685 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 80a474cc-3c31-466c-a2a1-4388bdb32793 0xc0041f1757 0xc0041f1758}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"80a474cc-3c31-466c-a2a1-4388bdb32793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-286hc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-286hc webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-286hc 637ceec7-974c-4bb0-b01d-bcbce7b27f2a 10353700 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f18c7 0xc0041f18c8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/servic
eaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-2nq5r" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2nq5r webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-2nq5r 40d7db0e-688d-43f6-b013-e2ba98ebf0c8 10353697 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f1a27 0xc0041f1a28}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadine
ssGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-6w4f8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6w4f8 webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-6w4f8 d65fc13d-e518-4128-9892-09170d192252 10353549 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f1b67 0xc0041f1b68}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.172\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.172,StartTime:2020-06-05 01:08:02 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0f18198c3af06be6c4e7c0c0b34f5544b2197f7411775d1bdc53079309300106,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-7bdnv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7bdnv webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-7bdnv b8804a69-877d-431f-a5d4-25d074e0fb19 10353693 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f1d17 0xc0041f1d18}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-8f4fg" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8f4fg webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-8f4fg a613b420-7d1b-4ffa-8bbe-6924819655fa 10353729 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f1e47 0xc0041f1e48}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-05 01:08:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.723: INFO: Pod "webserver-deployment-84855cf797-cdhml" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cdhml webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-cdhml 1b6468d1-63b2-44f3-ad39-34176708e334 10353676 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc0041f1fd7 0xc0041f1fd8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-cj5b2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cj5b2 webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-cj5b2 5e6b927c-db39-46d5-adbd-37b0c792842a 10353568 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c107 0xc00421c108}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.17,StartTime:2020-06-05 01:08:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fa2593c6aad558d1281722f721375b772002eef24d0e0eed7ec3b26161992205,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-ctn5w" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ctn5w webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-ctn5w 30575448-f850-4b7e-87ca-9a89af387dda 10353536 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c2b7 0xc00421c2b8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:10 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.15,StartTime:2020-06-05 01:08:02 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e12ef2f1cb953ba321474549de4554c3956e637f9114d9fad390478da13e6546,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-dbndq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dbndq webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-dbndq 6e848c34-2db6-4f7c-98c3-3200eb22de0f 10353688 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c467 0xc00421c468}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-ddjv6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ddjv6 webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-ddjv6 ef4865f7-2b70-41da-aee8-22bdc4419ff9 10353675 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c5d7 0xc00421c5d8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/servic
eaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-k5hw2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k5hw2 webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-k5hw2 6eaeb341-bb76-4997-90e0-1fce0c5e4f17 10353696 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c717 0xc00421c718}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadines
sGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-kq79q" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kq79q webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-kq79q 263b3501-0131-4d74-b369-b05bafc11375 10353524 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421c847 0xc00421c848}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.14,StartTime:2020-06-05 01:08:02 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f548230f337181cf1b3d5955af3158d7851344bfa4009ca6b14ce2cf3120c52d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.724: INFO: Pod "webserver-deployment-84855cf797-m4w2p" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m4w2p webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-m4w2p 2bc1022b-db03-48b0-b647-625f11b703c1 10353578 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421ca67 0xc00421ca68}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:13 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.176,StartTime:2020-06-05 
01:08:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2b2a578f4b0a85f28e3437e86b248074431319a3062375c7349ae3362e694974,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.725: INFO: Pod "webserver-deployment-84855cf797-ngczz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ngczz webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-ngczz a661a087-6c95-41e2-98f4-6a632c7d71d1 10353698 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421cc77 0xc00421cc78}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.725: INFO: Pod "webserver-deployment-84855cf797-njp9s" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-njp9s webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-njp9s e896ad2d-310f-471b-997e-88f2b82e193e 10353540 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421cdb7 0xc00421cdb8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.173\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.173,StartTime:2020-06-05 01:08:02 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ed39de2870fe088bf51a85af136f3adf698b60e3bc52e1230f0f159f923f1d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.725: INFO: Pod "webserver-deployment-84855cf797-st5ml" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-st5ml webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-st5ml 8ff493db-bcbf-4d02-8393-a4145ff40137 10353573 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421cf77 0xc00421cf78}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:13 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.174,StartTime:2020-06-05 
01:08:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://70b5ffb1e354a8001382c9e4e20cee623c954d6d98f13c71aeaf60709725d175,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.725: INFO: Pod "webserver-deployment-84855cf797-tw4n5" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tw4n5 webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-tw4n5 cfe9ec71-ad9b-4ff8-8109-6a8752b4a859 10353581 0 2020-06-05 01:08:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421d187 0xc00421d188}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 
01:08:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.175\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.175,StartTime:2020-06-05 01:08:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-05 01:08:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fa35a523b13b5b831015dbe557eb1f16b14537e3747e6582dade9f188a356004,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 5 01:08:19.725: INFO: Pod "webserver-deployment-84855cf797-vtbkq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vtbkq webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-vtbkq 6d97e248-c7c2-4568-8943-beb9071738bc 10353701 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421d357 0xc00421d358}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 5 01:08:19.726: INFO: Pod "webserver-deployment-84855cf797-xfpck" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xfpck webserver-deployment-84855cf797- deployment-4077 
/api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-xfpck 7c612157-38b5-499f-9ba8-72cb0d05e554 10353692 0 2020-06-05 01:08:19 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421d4c7 0xc00421d4c8}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/servic
eaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 5 01:08:19.726: INFO: Pod "webserver-deployment-84855cf797-xsctm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xsctm webserver-deployment-84855cf797- deployment-4077 /api/v1/namespaces/deployment-4077/pods/webserver-deployment-84855cf797-xsctm f797a743-96a1-406f-bd29-0173408f4753 10353720 0 2020-06-05 01:08:18 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 f14aa6c7-5efd-4fd9-9d72-4058dc02982b 0xc00421d627 0xc00421d628}] [] [{kube-controller-manager Update v1 2020-06-05 01:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f14aa6c7-5efd-4fd9-9d72-4058dc02982b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-05 01:08:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b2ql4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b2ql4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b2ql4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-05 01:08:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-05 01:08:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 01:08:19.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4077" for this suite.
• [SLOW TEST:17.928 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":284,"skipped":4641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 01:08:19.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-bde09cbf-d6fa-4922-9782-1e5b67aa335a
STEP: Creating a pod to test consume configMaps
Jun 5 01:08:20.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d" in namespace "configmap-8739" to be "Succeeded or Failed"
Jun 5 01:08:20.206: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.011334ms
Jun 5 01:08:22.272: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126075941s
Jun 5 01:08:24.699: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553179602s
Jun 5 01:08:27.235: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.088658336s
Jun 5 01:08:30.322: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175924076s
Jun 5 01:08:32.458: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.311597257s
Jun 5 01:08:34.536: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.390042811s
Jun 5 01:08:36.608: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.461665502s
Jun 5 01:08:38.649: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Running", Reason="", readiness=true. Elapsed: 18.502780561s
Jun 5 01:08:40.659: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Running", Reason="", readiness=true. Elapsed: 20.513145657s
Jun 5 01:08:42.663: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.516829231s
STEP: Saw pod success
Jun 5 01:08:42.663: INFO: Pod "pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d" satisfied condition "Succeeded or Failed"
Jun 5 01:08:42.666: INFO: Trying to get logs from node latest-worker pod pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d container configmap-volume-test:
STEP: delete the pod
Jun 5 01:08:42.806: INFO: Waiting for pod pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d to disappear
Jun 5 01:08:42.821: INFO: Pod pod-configmaps-43153a93-b881-4f3e-a505-ccdb7045044d no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 01:08:42.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8739" for this suite.
• [SLOW TEST:23.006 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":285,"skipped":4708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 01:08:42.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-6122/configmap-test-0f5fc524-4acc-4012-b5ee-937e4cd0748e
STEP: Creating a pod to test consume configMaps
Jun 5 01:08:43.032: INFO: Waiting up to 5m0s for pod "pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0" in namespace "configmap-6122" to be "Succeeded or Failed"
Jun 5 01:08:43.065: INFO: Pod "pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.726065ms
Jun 5 01:08:45.069: INFO: Pod "pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037240452s
Jun 5 01:08:47.073: INFO: Pod "pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040858899s
STEP: Saw pod success
Jun 5 01:08:47.073: INFO: Pod "pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0" satisfied condition "Succeeded or Failed"
Jun 5 01:08:47.075: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0 container env-test:
STEP: delete the pod
Jun 5 01:08:47.121: INFO: Waiting for pod pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0 to disappear
Jun 5 01:08:47.134: INFO: Pod pod-configmaps-47aac59d-6389-4221-8adf-d3461f9f85a0 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 01:08:47.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6122" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":286,"skipped":4740,"failed":0}
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 01:08:47.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jun 5 01:08:52.053: INFO: Successfully updated pod "labelsupdate7279f769-ab60-48c8-9086-b8305d662228"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 01:08:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2652" for this suite.
• [SLOW TEST:6.986 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4740,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 5 01:08:54.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 5 01:08:58.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-926" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 5 01:08:58.297: INFO: Running AfterSuite actions on all nodes
Jun 5 01:08:58.297: INFO: Running AfterSuite actions on node 1
Jun 5 01:08:58.297: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}
Ran 288 of 5095 Specs in 5410.680 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS
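The run above ends by writing a JUnit report (junit_01.xml) whose pass/fail/skip counts correspond to the SUCCESS! summary line. As a minimal sketch of how such a report can be cross-checked against the summary, the snippet below parses a JUnit-style testsuite with Python's standard library and tallies outcomes. The sample XML here is synthetic, written only to mirror the report's general shape; it is not taken from this run, and real reports may nest testsuites or carry extra attributes.

```python
import xml.etree.ElementTree as ET

# Synthetic stand-in for a JUnit report like junit_01.xml (illustrative only).
SAMPLE = """<testsuite tests="3" failures="0">
  <testcase name="[sig-storage] ConfigMap should be consumable from pods in volume" time="23.006"/>
  <testcase name="[sig-node] ConfigMap should be consumable via environment variable" time="4.271"/>
  <testcase name="[sig-cli] Kubectl client Proxy server should support proxy with --port 0" time="0.9">
    <skipped/>
  </testcase>
</testsuite>"""

def summarize(xml_text: str) -> dict:
    """Count passed/failed/skipped testcases in a JUnit-style testsuite."""
    root = ET.fromstring(xml_text)
    total = failed = skipped = 0
    for case in root.iter("testcase"):
        total += 1
        # A <failure> or <error> child marks a failed case; <skipped> marks a skip.
        if case.find("failure") is not None or case.find("error") is not None:
            failed += 1
        elif case.find("skipped") is not None:
            skipped += 1
    return {"total": total, "failed": failed, "skipped": skipped,
            "passed": total - failed - skipped}

print(summarize(SAMPLE))  # -> {'total': 3, 'failed': 0, 'skipped': 1, 'passed': 2}
```

Pointed at an actual report (e.g. `ET.parse(path).getroot()` on the junit_01.xml path the log names), the same tally should reproduce the "288 Passed | 0 Failed" figures printed above.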