I0514 23:39:30.212644 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0514 23:39:30.212942 7 e2e.go:129] Starting e2e run "d3ecd145-5de8-4bc3-91a4-a686f614c9c3" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589499569 - Will randomize all specs
Will run 288 of 5095 specs

May 14 23:39:30.269: INFO: >>> kubeConfig: /root/.kube/config
May 14 23:39:30.272: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 14 23:39:30.288: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 14 23:39:30.329: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 14 23:39:30.329: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 14 23:39:30.329: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 14 23:39:30.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 14 23:39:30.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 14 23:39:30.339: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 14 23:39:30.340: INFO: kube-apiserver version: v1.18.2
May 14 23:39:30.340: INFO: >>> kubeConfig: /root/.kube/config
May 14 23:39:30.348: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:39:30.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
May 14 23:39:30.405: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 14 23:39:30.413: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff" in namespace "security-context-test-2742" to be "Succeeded or Failed"
May 14 23:39:30.426: INFO: Pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.760698ms
May 14 23:39:32.429: INFO: Pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016063578s
May 14 23:39:34.433: INFO: Pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019531001s
May 14 23:39:36.438: INFO: Pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024175565s
May 14 23:39:36.438: INFO: Pod "busybox-user-65534-a8a6db2e-2ef1-46cd-b09c-8c669b8677ff" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:39:36.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2742" for this suite.

• [SLOW TEST:6.099 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":43,"failed":0}
SSSSSSSSSSSSSS
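The spec above drives the pod-level securityContext.runAsUser field and asserts the container's effective uid. A minimal client-go sketch of the same pod shape (illustrative only; `client`, the pod name, and the namespace are assumptions, not the suite's own code):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createRunAsUserPod creates a busybox pod whose container must run as uid
// 65534 ("nobody"); the conformance test then checks the uid reported inside.
func createRunAsUserPod(client kubernetes.Interface, ns string) error {
	uid := int64(65534)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
```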
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:39:36.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7542
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7542
I0514 23:39:36.605103 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7542, replica count: 2
I0514 23:39:39.655687 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0514 23:39:42.655943 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 14 23:39:42.656: INFO: Creating new exec pod
May 14 23:39:47.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7542 execpod4559g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 14 23:39:50.230: INFO: stderr: "I0514 23:39:50.079469 31 log.go:172] (0xc00003bad0) (0xc0005d6960) Create stream\nI0514 23:39:50.079533 31 log.go:172] (0xc00003bad0) (0xc0005d6960) Stream added, broadcasting: 1\nI0514 23:39:50.082091 31 log.go:172] (0xc00003bad0) Reply frame received for 1\nI0514 23:39:50.082130 31 log.go:172] (0xc00003bad0) (0xc0005c2be0) Create stream\nI0514 23:39:50.082140 31 log.go:172] (0xc00003bad0) (0xc0005c2be0) Stream added, broadcasting: 3\nI0514 23:39:50.083030 31 log.go:172] (0xc00003bad0) Reply frame received for 3\nI0514 23:39:50.083058 31 log.go:172] (0xc00003bad0) (0xc0005bc460) Create stream\nI0514 23:39:50.083065 31 log.go:172] (0xc00003bad0) (0xc0005bc460) Stream added, broadcasting: 5\nI0514 23:39:50.083822 31 log.go:172] (0xc00003bad0) Reply frame received for 5\nI0514 23:39:50.194500 31 log.go:172] (0xc00003bad0) Data frame received for 5\nI0514 23:39:50.194521 31 log.go:172] (0xc0005bc460) (5) Data frame handling\nI0514 23:39:50.194532 31 log.go:172] (0xc0005bc460) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0514 23:39:50.222027 31 log.go:172] (0xc00003bad0) Data frame received for 5\nI0514 23:39:50.222060 31 log.go:172] (0xc0005bc460) (5) Data frame handling\nI0514 23:39:50.222087 31 log.go:172] (0xc0005bc460) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0514 23:39:50.222269 31 log.go:172] (0xc00003bad0) Data frame received for 3\nI0514 23:39:50.222307 31 log.go:172] (0xc0005c2be0) (3) Data frame handling\nI0514 23:39:50.222328 31 log.go:172] (0xc00003bad0) Data frame received for 5\nI0514 23:39:50.222340 31 log.go:172] (0xc0005bc460) (5) Data frame handling\nI0514 23:39:50.224183 31 log.go:172] (0xc00003bad0) Data frame received for 1\nI0514 23:39:50.224216 31 log.go:172] (0xc0005d6960) (1) Data frame handling\nI0514 23:39:50.224230 31 log.go:172] (0xc0005d6960) (1) Data frame sent\nI0514 23:39:50.224362 31 log.go:172] (0xc00003bad0) (0xc0005d6960) Stream removed, broadcasting: 1\nI0514 23:39:50.224402 31 log.go:172] (0xc00003bad0) Go away received\nI0514 23:39:50.224713 31 log.go:172] (0xc00003bad0) (0xc0005d6960) Stream removed, broadcasting: 1\nI0514 23:39:50.224737 31 log.go:172] (0xc00003bad0) (0xc0005c2be0) Stream removed, broadcasting: 3\nI0514 23:39:50.224751 31 log.go:172] (0xc00003bad0) (0xc0005bc460) Stream removed, broadcasting: 5\n"
May 14 23:39:50.230: INFO: stdout: ""
May 14 23:39:50.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7542 execpod4559g -- /bin/sh -x -c nc -zv -t -w 2 10.109.74.107 80'
May 14 23:39:50.460: INFO: stderr: "I0514 23:39:50.367762 60 log.go:172] (0xc00003a0b0) (0xc000612e60) Create stream\nI0514 23:39:50.367821 60 log.go:172] (0xc00003a0b0) (0xc000612e60) Stream added, broadcasting: 1\nI0514 23:39:50.369899 60 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0514 23:39:50.369954 60 log.go:172] (0xc00003a0b0) (0xc000372000) Create stream\nI0514 23:39:50.369970 60 log.go:172] (0xc00003a0b0) (0xc000372000) Stream added, broadcasting: 3\nI0514 23:39:50.370906 60 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0514 23:39:50.370939 60 log.go:172] (0xc00003a0b0) (0xc000420fa0) Create stream\nI0514 23:39:50.370952 60 log.go:172] (0xc00003a0b0) (0xc000420fa0) Stream added, broadcasting: 5\nI0514 23:39:50.371833 60 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0514 23:39:50.451689 60 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0514 23:39:50.451775 60 log.go:172] (0xc000420fa0) (5) Data frame handling\nI0514 23:39:50.451797 60 log.go:172] (0xc000420fa0) (5) Data frame sent\nI0514 23:39:50.451812 60 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0514 23:39:50.451823 60 log.go:172] (0xc000420fa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.74.107 80\nConnection to 10.109.74.107 80 port [tcp/http] succeeded!\nI0514 23:39:50.451851 60 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0514 23:39:50.451879 60 log.go:172] (0xc000372000) (3) Data frame handling\nI0514 23:39:50.453550 60 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0514 23:39:50.453583 60 log.go:172] (0xc000612e60) (1) Data frame handling\nI0514 23:39:50.453598 60 log.go:172] (0xc000612e60) (1) Data frame sent\nI0514 23:39:50.453608 60 log.go:172] (0xc00003a0b0) (0xc000612e60) Stream removed, broadcasting: 1\nI0514 23:39:50.453624 60 log.go:172] (0xc00003a0b0) Go away received\nI0514 23:39:50.454102 60 log.go:172] (0xc00003a0b0) (0xc000612e60) Stream removed, broadcasting: 1\nI0514 23:39:50.454141 60 log.go:172] (0xc00003a0b0) (0xc000372000) Stream removed, broadcasting: 3\nI0514 23:39:50.454155 60 log.go:172] (0xc00003a0b0) (0xc000420fa0) Stream removed, broadcasting: 5\n"
May 14 23:39:50.460: INFO: stdout: ""
May 14 23:39:50.460: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:39:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7542" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:14.066 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":2,"skipped":57,"failed":0}
S
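The nc probes above verify the service after its type is flipped. A sketch of that type change with client-go (illustrative only; `client`, the namespace, the service name, and the port are assumptions):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// switchToClusterIP turns an ExternalName service into a ClusterIP one: the
// CNAME target is dropped, the type changes, and a port is added so the
// service can front real endpoints, which is what the log's nc checks hit.
func switchToClusterIP(client kubernetes.Interface, ns, name string) error {
	ctx := context.TODO()
	svc, err := client.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}
	_, err = client.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
```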
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:39:50.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-8fb5670d-2cf0-42ab-939c-368941bd16fd
STEP: Creating a pod to test consume secrets
May 14 23:39:50.593: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44" in namespace "projected-5497" to be "Succeeded or Failed"
May 14 23:39:50.598: INFO: Pod "pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535185ms
May 14 23:39:52.601: INFO: Pod "pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00796338s
May 14 23:39:54.606: INFO: Pod "pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012994634s
STEP: Saw pod success
May 14 23:39:54.607: INFO: Pod "pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44" satisfied condition "Succeeded or Failed"
May 14 23:39:54.610: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44 container projected-secret-volume-test:
STEP: delete the pod
May 14 23:39:54.654: INFO: Waiting for pod pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44 to disappear
May 14 23:39:54.664: INFO: Pod pod-projected-secrets-88d50eb4-f8e9-448f-a47e-acc66974af44 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:39:54.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5497" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":58,"failed":0}
SSSSSSSSSSSSS
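"With mappings" here means individual secret keys are remapped to chosen file paths inside the mount. A minimal sketch of that volume shape (illustrative only; the secret name, key, and path are assumptions standing in for the generated names in the log):

```go
package main

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume maps the secret key "data-1" to the file
// "new-path-data-1" inside the projected mount instead of using the key name.
func projectedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				}},
			},
		},
	}
}
```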
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:39:54.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 14 23:40:01.275: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4896af2c-5026-4f9c-8ed8-1d857e2d7bb2"
May 14 23:40:01.275: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4896af2c-5026-4f9c-8ed8-1d857e2d7bb2" in namespace "pods-716" to be "terminated due to deadline exceeded"
May 14 23:40:01.378: INFO: Pod "pod-update-activedeadlineseconds-4896af2c-5026-4f9c-8ed8-1d857e2d7bb2": Phase="Running", Reason="", readiness=true. Elapsed: 102.955495ms
May 14 23:40:03.382: INFO: Pod "pod-update-activedeadlineseconds-4896af2c-5026-4f9c-8ed8-1d857e2d7bb2": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.10716327s
May 14 23:40:03.382: INFO: Pod "pod-update-activedeadlineseconds-4896af2c-5026-4f9c-8ed8-1d857e2d7bb2" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:40:03.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-716" for this suite.

• [SLOW TEST:8.723 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":71,"failed":0}
SSSSS
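The update step above works because activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod; once it elapses the kubelet fails the pod with reason DeadlineExceeded, as the log shows. A sketch of that update (illustrative only; `client`, the namespace, the name, and the 5s deadline are assumptions):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shortenActiveDeadline sets a short activeDeadlineSeconds on a running pod,
// after which the kubelet terminates it with Phase=Failed/DeadlineExceeded.
func shortenActiveDeadline(client kubernetes.Interface, ns, name string) error {
	ctx := context.TODO()
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	_, err = client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
```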
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":76,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:40:07.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-d1de84e3-de3f-4c1e-93ed-7d81cc433395 STEP: Creating a pod to test consume configMaps May 14 23:40:07.768: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261" in namespace "projected-1946" to be "Succeeded or Failed" May 14 23:40:07.791: INFO: Pod "pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261": Phase="Pending", Reason="", readiness=false. Elapsed: 22.562695ms May 14 23:40:09.802: INFO: Pod "pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033965296s May 14 23:40:11.806: INFO: Pod "pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037366859s STEP: Saw pod success May 14 23:40:11.806: INFO: Pod "pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261" satisfied condition "Succeeded or Failed" May 14 23:40:11.809: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261 container projected-configmap-volume-test: STEP: delete the pod May 14 23:40:11.935: INFO: Waiting for pod pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261 to disappear May 14 23:40:11.939: INFO: Pod pod-projected-configmaps-11a21ea2-bc8a-4c57-b7f7-0719c944a261 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:40:11.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1946" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":90,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:40:11.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 STEP: updating the pod May 14 23:40:18.628: INFO: Successfully updated pod "var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8" STEP: waiting for pod and container restart STEP: Failing liveness probe May 14 23:40:18.646: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-4507 PodName:var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 23:40:18.646: INFO: >>> kubeConfig: /root/.kube/config I0514 23:40:18.700382 7 log.go:172] (0xc00133c6e0) (0xc002053360) Create stream I0514 23:40:18.700420 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream added, broadcasting: 1 I0514 23:40:18.703280 7 log.go:172] (0xc00133c6e0) Reply frame received for 1 I0514 23:40:18.703313 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Create stream I0514 23:40:18.703321 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Stream added, broadcasting: 3 I0514 23:40:18.708318 7 log.go:172] (0xc00133c6e0) Reply frame received for 3 I0514 23:40:18.708338 7 log.go:172] (0xc00133c6e0) (0xc002053400) Create stream I0514 23:40:18.708346 7 log.go:172] (0xc00133c6e0) (0xc002053400) Stream added, broadcasting: 5 I0514 23:40:18.708870 7 log.go:172] (0xc00133c6e0) Reply frame received for 5 I0514 23:40:18.763569 7 log.go:172] (0xc00133c6e0) Data frame received for 5 I0514 23:40:18.763594 7 log.go:172] (0xc002053400) (5) Data frame handling I0514 23:40:18.763641 7 log.go:172] (0xc00133c6e0) Data frame received for 3 I0514 23:40:18.763673 7 log.go:172] (0xc001fe4e60) (3) Data frame handling I0514 23:40:18.764899 7 log.go:172] (0xc00133c6e0) Data frame received for 1 I0514 23:40:18.764924 7 log.go:172] (0xc002053360) (1) Data frame handling I0514 23:40:18.764944 7 log.go:172] (0xc002053360) (1) Data frame sent I0514 23:40:18.764958 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream removed, broadcasting: 1 I0514 23:40:18.765021 7 log.go:172] (0xc00133c6e0) Go away received I0514 23:40:18.765451 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream removed, broadcasting: 1 I0514 23:40:18.765466 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Stream removed, broadcasting: 3 I0514 23:40:18.765474 7 log.go:172] (0xc00133c6e0) (0xc002053400) Stream removed, broadcasting: 5 May 14 23:40:18.765: INFO: Pod exec output: / STEP: Waiting for container to restart May 14 
------------------------------
[k8s.io] Variable Expansion
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:40:11.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8
STEP: updating the pod
May 14 23:40:18.628: INFO: Successfully updated pod "var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
May 14 23:40:18.646: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-4507 PodName:var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 23:40:18.646: INFO: >>> kubeConfig: /root/.kube/config
I0514 23:40:18.700382 7 log.go:172] (0xc00133c6e0) (0xc002053360) Create stream
I0514 23:40:18.700420 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream added, broadcasting: 1
I0514 23:40:18.703280 7 log.go:172] (0xc00133c6e0) Reply frame received for 1
I0514 23:40:18.703313 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Create stream
I0514 23:40:18.703321 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Stream added, broadcasting: 3
I0514 23:40:18.708318 7 log.go:172] (0xc00133c6e0) Reply frame received for 3
I0514 23:40:18.708338 7 log.go:172] (0xc00133c6e0) (0xc002053400) Create stream
I0514 23:40:18.708346 7 log.go:172] (0xc00133c6e0) (0xc002053400) Stream added, broadcasting: 5
I0514 23:40:18.708870 7 log.go:172] (0xc00133c6e0) Reply frame received for 5
I0514 23:40:18.763569 7 log.go:172] (0xc00133c6e0) Data frame received for 5
I0514 23:40:18.763594 7 log.go:172] (0xc002053400) (5) Data frame handling
I0514 23:40:18.763641 7 log.go:172] (0xc00133c6e0) Data frame received for 3
I0514 23:40:18.763673 7 log.go:172] (0xc001fe4e60) (3) Data frame handling
I0514 23:40:18.764899 7 log.go:172] (0xc00133c6e0) Data frame received for 1
I0514 23:40:18.764924 7 log.go:172] (0xc002053360) (1) Data frame handling
I0514 23:40:18.764944 7 log.go:172] (0xc002053360) (1) Data frame sent
I0514 23:40:18.764958 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream removed, broadcasting: 1
I0514 23:40:18.765021 7 log.go:172] (0xc00133c6e0) Go away received
I0514 23:40:18.765451 7 log.go:172] (0xc00133c6e0) (0xc002053360) Stream removed, broadcasting: 1
I0514 23:40:18.765466 7 log.go:172] (0xc00133c6e0) (0xc001fe4e60) Stream removed, broadcasting: 3
I0514 23:40:18.765474 7 log.go:172] (0xc00133c6e0) (0xc002053400) Stream removed, broadcasting: 5
May 14 23:40:18.765: INFO: Pod exec output: /
STEP: Waiting for container to restart
May 14 23:40:18.768: INFO: Container dapi-container, restarts: 0
May 14 23:40:28.776: INFO: Container dapi-container, restarts: 0
May 14 23:40:38.774: INFO: Container dapi-container, restarts: 0
May 14 23:40:48.772: INFO: Container dapi-container, restarts: 0
May 14 23:40:58.772: INFO: Container dapi-container, restarts: 1
May 14 23:40:58.773: INFO: Container has restart count: 1
STEP: Rewriting the file
May 14 23:40:58.773: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-4507 PodName:var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 23:40:58.773: INFO: >>> kubeConfig: /root/.kube/config
I0514 23:40:58.803855 7 log.go:172] (0xc0025322c0) (0xc001cb14a0) Create stream
I0514 23:40:58.803883 7 log.go:172] (0xc0025322c0) (0xc001cb14a0) Stream added, broadcasting: 1
I0514 23:40:58.805999 7 log.go:172] (0xc0025322c0) Reply frame received for 1
I0514 23:40:58.806031 7 log.go:172] (0xc0025322c0) (0xc001fe4f00) Create stream
I0514 23:40:58.806042 7 log.go:172] (0xc0025322c0) (0xc001fe4f00) Stream added, broadcasting: 3
I0514 23:40:58.806888 7 log.go:172] (0xc0025322c0) Reply frame received for 3
I0514 23:40:58.806912 7 log.go:172] (0xc0025322c0) (0xc001cb1540) Create stream
I0514 23:40:58.806925 7 log.go:172] (0xc0025322c0) (0xc001cb1540) Stream added, broadcasting: 5
I0514 23:40:58.807727 7 log.go:172] (0xc0025322c0) Reply frame received for 5
I0514 23:40:58.887929 7 log.go:172] (0xc0025322c0) Data frame received for 3
I0514 23:40:58.887960 7 log.go:172] (0xc001fe4f00) (3) Data frame handling
I0514 23:40:58.887980 7 log.go:172] (0xc0025322c0) Data frame received for 5
I0514 23:40:58.887990 7 log.go:172] (0xc001cb1540) (5) Data frame handling
I0514 23:40:58.889751 7 log.go:172] (0xc0025322c0) Data frame received for 1
I0514 23:40:58.889775 7 log.go:172] (0xc001cb14a0) (1) Data frame handling
I0514 23:40:58.889796 7 log.go:172] (0xc001cb14a0) (1) Data frame sent
I0514 23:40:58.889849 7 log.go:172] (0xc0025322c0) (0xc001cb14a0) Stream removed, broadcasting: 1
I0514 23:40:58.889870 7 log.go:172] (0xc0025322c0) Go away received
I0514 23:40:58.889910 7 log.go:172] (0xc0025322c0) (0xc001cb14a0) Stream removed, broadcasting: 1
I0514 23:40:58.889927 7 log.go:172] (0xc0025322c0) (0xc001fe4f00) Stream removed, broadcasting: 3
I0514 23:40:58.889933 7 log.go:172] (0xc0025322c0) (0xc001cb1540) Stream removed, broadcasting: 5
May 14 23:40:58.889: INFO: Exec stderr: ""
May 14 23:40:58.889: INFO: Pod exec output:
STEP: Waiting for container to stop restarting
May 14 23:41:28.897: INFO: Container has restart count: 2
May 14 23:42:30.914: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
May 14 23:42:30.918: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-4507 PodName:var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 23:42:30.918: INFO: >>> kubeConfig: /root/.kube/config
I0514 23:42:30.945064 7 log.go:172] (0xc002e2f6b0) (0xc0021ca640) Create stream
I0514 23:42:30.945090 7 log.go:172] (0xc002e2f6b0) (0xc0021ca640) Stream added, broadcasting: 1
I0514 23:42:30.946827 7 log.go:172] (0xc002e2f6b0) Reply frame received for 1
I0514 23:42:30.946856 7 log.go:172] (0xc002e2f6b0) (0xc0021ca6e0) Create stream
I0514 23:42:30.946863 7 log.go:172] (0xc002e2f6b0) (0xc0021ca6e0) Stream added, broadcasting: 3
I0514 23:42:30.947715 7 log.go:172] (0xc002e2f6b0) Reply frame received for 3
I0514 23:42:30.947753 7 log.go:172] (0xc002e2f6b0) (0xc002208280) Create stream
I0514 23:42:30.947765 7 log.go:172] (0xc002e2f6b0) (0xc002208280) Stream added, broadcasting: 5
I0514 23:42:30.948728 7 log.go:172] (0xc002e2f6b0) Reply frame received for 5
I0514 23:42:31.006189 7 log.go:172] (0xc002e2f6b0) Data frame received for 5
I0514 23:42:31.006233 7 log.go:172] (0xc002208280) (5) Data frame handling
I0514 23:42:31.006263 7 log.go:172] (0xc002e2f6b0) Data frame received for 3
I0514 23:42:31.006277 7 log.go:172] (0xc0021ca6e0) (3) Data frame handling
I0514 23:42:31.008491 7 log.go:172] (0xc002e2f6b0) Data frame received for 1
I0514 23:42:31.008522 7 log.go:172] (0xc0021ca640) (1) Data frame handling
I0514 23:42:31.008550 7 log.go:172] (0xc0021ca640) (1) Data frame sent
I0514 23:42:31.008579 7 log.go:172] (0xc002e2f6b0) (0xc0021ca640) Stream removed, broadcasting: 1
I0514 23:42:31.008667 7 log.go:172] (0xc002e2f6b0) Go away received
I0514 23:42:31.008955 7 log.go:172] (0xc002e2f6b0) (0xc0021ca640) Stream removed, broadcasting: 1
I0514 23:42:31.009004 7 log.go:172] (0xc002e2f6b0) (0xc0021ca6e0) Stream removed, broadcasting: 3
I0514 23:42:31.009096 7 log.go:172] (0xc002e2f6b0) (0xc002208280) Stream removed, broadcasting: 5
May 14 23:42:31.013: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-4507 PodName:var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 14 23:42:31.013: INFO: >>> kubeConfig: /root/.kube/config
I0514 23:42:31.042302 7 log.go:172] (0xc002e2fd90) (0xc0021caf00) Create stream
I0514 23:42:31.042335 7 log.go:172] (0xc002e2fd90) (0xc0021caf00) Stream added, broadcasting: 1
I0514 23:42:31.044200 7 log.go:172] (0xc002e2fd90) Reply frame received for 1
I0514 23:42:31.044230 7 log.go:172] (0xc002e2fd90) (0xc0021cafa0) Create stream
I0514 23:42:31.044240 7 log.go:172] (0xc002e2fd90) (0xc0021cafa0) Stream added, broadcasting: 3
I0514 23:42:31.044920 7 log.go:172] (0xc002e2fd90) Reply frame received for 3
I0514 23:42:31.044962 7 log.go:172] (0xc002e2fd90) (0xc0021cb040) Create stream
I0514 23:42:31.044978 7 log.go:172] (0xc002e2fd90) (0xc0021cb040) Stream added, broadcasting: 5
I0514 23:42:31.045930 7 log.go:172] (0xc002e2fd90) Reply frame received for 5
I0514 23:42:31.115741 7 log.go:172] (0xc002e2fd90) Data frame received for 3
I0514 23:42:31.115778 7 log.go:172] (0xc0021cafa0) (3) Data frame handling
I0514 23:42:31.115841 7 log.go:172] (0xc002e2fd90) Data frame received for 5
I0514 23:42:31.116060 7 log.go:172] (0xc0021cb040) (5) Data frame handling
I0514 23:42:31.117373 7 log.go:172] (0xc002e2fd90) Data frame received for 1
I0514 23:42:31.117400 7 log.go:172] (0xc0021caf00) (1) Data frame handling
I0514 23:42:31.117410 7 log.go:172] (0xc0021caf00) (1) Data frame sent
I0514 23:42:31.117421 7 log.go:172] (0xc002e2fd90) (0xc0021caf00) Stream removed, broadcasting: 1
I0514 23:42:31.117472 7 log.go:172] (0xc002e2fd90) (0xc0021caf00) Stream removed, broadcasting: 1
I0514 23:42:31.117481 7 log.go:172] (0xc002e2fd90) (0xc0021cafa0) Stream removed, broadcasting: 3
I0514 23:42:31.117727 7 log.go:172] (0xc002e2fd90) Go away received
I0514 23:42:31.117782 7 log.go:172] (0xc002e2fd90) (0xc0021cb040) Stream removed, broadcasting: 5
"var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8" in namespace "var-expansion-4507" May 14 23:42:31.124: INFO: Wait up to 5m0s for pod "var-expansion-6c8d4602-64b8-45f7-894e-55274e8efee8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:43:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4507" for this suite. • [SLOW TEST:183.213 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":7,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:43:15.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-22a971a4-f183-4691-9e0b-bf0a41248c59 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-22a971a4-f183-4691-9e0b-bf0a41248c59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:44:45.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3370" for this suite. 
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:43:15.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-22a971a4-f183-4691-9e0b-bf0a41248c59
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-22a971a4-f183-4691-9e0b-bf0a41248c59
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:44:45.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3370" for this suite.

• [SLOW TEST:90.710 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":116,"failed":0}
SSSSSSSSSSSSSS
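The "waiting to observe update" step is why this spec is slow: the kubelet rewrites projected ConfigMap files on its periodic sync rather than instantly. A sketch of the update that triggers it (illustrative only; `client`, the namespace, the name, and the data values are assumptions):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapData changes the data behind an already-mounted projected
// ConfigMap volume; pods see the new file contents after the kubelet's next
// volume sync, which the test polls for.
func updateConfigMapData(client kubernetes.Interface, ns, name string) error {
	ctx := context.TODO()
	cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cm.Data = map[string]string{"data-1": "value-2"}
	_, err = client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```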
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:44:45.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2047
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
May 14 23:44:46.163: INFO: Found 0 stateful pods, waiting for 3
May 14 23:44:56.168: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:44:56.168: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:44:56.168: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 14 23:45:06.167: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:45:06.167: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:45:06.167: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 14 23:45:06.195: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 14 23:45:16.253: INFO: Updating stateful set ss2
May 14 23:45:16.331: INFO: Waiting for Pod statefulset-2047/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 14 23:45:27.047: INFO: Found 2 stateful pods, waiting for 3
May 14 23:45:37.052: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:45:37.052: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 23:45:37.052: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 14 23:45:37.076: INFO: Updating stateful set ss2
May 14 23:45:37.109: INFO: Waiting for Pod statefulset-2047/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 14 23:45:47.137: INFO: Updating stateful set ss2
May 14 23:45:47.172: INFO: Waiting for StatefulSet statefulset-2047/ss2 to complete update
May 14 23:45:47.172: INFO: Waiting for Pod statefulset-2047/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 14 23:45:57.182: INFO: Deleting all statefulset in ns statefulset-2047
May 14 23:45:57.185: INFO: Scaling statefulset ss2 to 0
May 14 23:46:17.202: INFO: Waiting for statefulset status.replicas updated to 0
May 14 23:46:17.206: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:46:17.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2047" for this suite.

• [SLOW TEST:91.411 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":9,"skipped":130,"failed":0}
SSSSSSSSSSSSSSSSSS
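Both the canary step (only ss2-2 moves to the new revision) and the phased rollout (ss2-1, then ss2-0) are driven by the RollingUpdate partition. A sketch of how that knob is set (illustrative only; `client`, the namespace, and the name are assumptions):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPartition configures a partitioned rolling update: only pods with
// ordinal >= partition are moved to the new revision. Setting it to
// replicas-1 yields a single canary; lowering it step by step phases the
// rollout across the remaining ordinals.
func setPartition(client kubernetes.Interface, ns, name string, partition int32) error {
	ctx := context.TODO()
	ss, err := client.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type:          appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	_, err = client.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}
```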
------------------------------
[k8s.io] Variable Expansion
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:46:17.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
May 14 23:48:17.928: INFO: Successfully updated pod "var-expansion-f07e29ad-a66c-4d3b-9492-287b57f3e6f3"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 14 23:48:19.953: INFO: Deleting pod "var-expansion-f07e29ad-a66c-4d3b-9492-287b57f3e6f3" in namespace "var-expansion-775"
May 14 23:48:19.959: INFO: Wait up to 5m0s for pod "var-expansion-f07e29ad-a66c-4d3b-9492-287b57f3e6f3" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:48:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-775" for this suite.

• [SLOW TEST:156.744 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":10,"skipped":148,"failed":0}
SSSSSSSSS
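The "updating the pod" step that unblocks the failed condition is a metadata change: the subPathExpr is expanded from an annotation-backed environment variable, so patching the annotation gives the expansion a usable value and the pod can start. A sketch of that patch (illustrative only; the annotation key, its value, and the helper names are assumptions, not the suite's code):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// fixSubpathAnnotation patches the annotation that feeds the container's
// subPathExpr (via an env var with a metadata.annotations fieldRef); once the
// expansion resolves to a creatable path, the stuck pod proceeds to Running.
func fixSubpathAnnotation(client kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"mysubpath":"newsubpath"}}}`)
	_, err := client.CoreV1().Pods(ns).Patch(
		context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```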
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:48:58.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 14 23:49:03.148: INFO: Successfully updated pod "annotationupdatef51977aa-8800-4bd0-9283-8102f636ed89" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:49:07.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1823" for this suite. • [SLOW TEST:8.680 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":207,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:49:07.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 14 23:49:07.316: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 14 23:49:07.334: INFO: Number of nodes with available pods: 0 May 14 23:49:07.334: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:49:07.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 14 23:49:07.316: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 14 23:49:07.334: INFO: Number of nodes with available pods: 0
May 14 23:49:07.334: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 14 23:49:07.433: INFO: Number of nodes with available pods: 0
May 14 23:49:07.433: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:08.437: INFO: Number of nodes with available pods: 0
May 14 23:49:08.437: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:09.437: INFO: Number of nodes with available pods: 0
May 14 23:49:09.437: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:10.438: INFO: Number of nodes with available pods: 0
May 14 23:49:10.438: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:11.436: INFO: Number of nodes with available pods: 1
May 14 23:49:11.436: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 14 23:49:11.475: INFO: Number of nodes with available pods: 1
May 14 23:49:11.476: INFO: Number of running nodes: 0, number of available pods: 1
May 14 23:49:12.479: INFO: Number of nodes with available pods: 0
May 14 23:49:12.479: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 14 23:49:12.516: INFO: Number of nodes with available pods: 0
May 14 23:49:12.516: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:13.520: INFO: Number of nodes with available pods: 0
May 14 23:49:13.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:14.519: INFO: Number of nodes with available pods: 0
May 14 23:49:14.519: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:15.519: INFO: Number of nodes with available pods: 0
May 14 23:49:15.519: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:16.519: INFO: Number of nodes with available pods: 0
May 14 23:49:16.519: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:17.518: INFO: Number of nodes with available pods: 0
May 14 23:49:17.518: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:18.520: INFO: Number of nodes with available pods: 0
May 14 23:49:18.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:19.520: INFO: Number of nodes with available pods: 0
May 14 23:49:19.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:20.520: INFO: Number of nodes with available pods: 0
May 14 23:49:20.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:21.520: INFO: Number of nodes with available pods: 0
May 14 23:49:21.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:22.521: INFO: Number of nodes with available pods: 0
May 14 23:49:22.521: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:23.521: INFO: Number of nodes with available pods: 0
May 14 23:49:23.521: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:24.520: INFO: Number of nodes with available pods: 0
May 14 23:49:24.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:25.520: INFO: Number of nodes with available pods: 0
May 14 23:49:25.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:26.520: INFO: Number of nodes with available pods: 0
May 14 23:49:26.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:27.630: INFO: Number of nodes with available pods: 0
May 14 23:49:27.630: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:28.520: INFO: Number of nodes with available pods: 0
May 14 23:49:28.520: INFO: Node latest-worker2 is running more than one daemon pod
May 14 23:49:29.520: INFO: Number of nodes with available pods: 1
May 14 23:49:29.520: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8835, will wait for the garbage collector to delete the pods
May 14 23:49:29.583: INFO: Deleting DaemonSet.extensions daemon-set took: 6.338367ms
May 14 23:49:29.883: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.209109ms
May 14 23:49:34.109: INFO: Number of nodes with available pods: 0
May 14 23:49:34.109: INFO: Number of running nodes: 0, number of available pods: 0
May 14 23:49:34.117: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8835/daemonsets","resourceVersion":"4663440"},"items":null}
May 14 23:49:34.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8835/pods","resourceVersion":"4663440"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:49:34.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8835" for this suite.

• [SLOW TEST:26.969 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":13,"skipped":211,"failed":0}
SSSSS
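The "change node label" steps work because a DaemonSet with a nodeSelector only schedules (and keeps) a daemon pod on nodes whose labels match; relabeling a node adds or evicts the pod, which is what the polling above waits for. A sketch of the relabel step (illustrative only; `client`, the label key "color", and the helper name are assumptions):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// labelNode sets the label a DaemonSet's pod template selects on via
// nodeSelector {"color": <value>}; switching the value from "blue" to "green"
// unschedules the daemon pod from the node, as the spec above observes.
func labelNode(client kubernetes.Interface, nodeName, value string) error {
	ctx := context.TODO()
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["color"] = value
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```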
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:49:34.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 14 23:49:34.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71" in namespace "projected-7191" to be "Succeeded or Failed"
May 14 23:49:34.275: INFO: Pod "downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946741ms
May 14 23:49:36.391: INFO: Pod "downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118577601s
May 14 23:49:38.395: INFO: Pod "downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122130789s
STEP: Saw pod success
May 14 23:49:38.395: INFO: Pod "downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71" satisfied condition "Succeeded or Failed"
May 14 23:49:38.398: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71 container client-container:
STEP: delete the pod
May 14 23:49:38.543: INFO: Waiting for pod downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71 to disappear
May 14 23:49:38.551: INFO: Pod downwardapi-volume-a4497345-cb39-4cb7-9ab0-274f17511a71 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:49:38.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7191" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
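Resource values like the memory request reach the container through a resourceFieldRef rather than a fieldRef. A sketch of the volume item involved (illustrative only; the container name, path, and divisor are assumptions):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryRequestItem exposes the named container's own memory request as a
// file in a downward API volume; the test reads the file back and compares it
// against the request declared in the pod spec.
func memoryRequestItem() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.memory",
			Divisor:       resource.MustParse("1"),
		},
	}
}
```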
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:49:38.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 14 23:49:41.842: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:49:42.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3025" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:49:42.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 14 23:49:42.534: INFO: Creating deployment "webserver-deployment"
May 14 23:49:42.542: INFO: Waiting for observed generation 1
May 14 23:49:44.648: INFO: Waiting for all required pods to come up
May 14 23:49:44.654: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 14 23:49:54.686: INFO: Waiting for deployment "webserver-deployment" to complete
May 14 23:49:54.692: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 14 23:49:54.699: INFO: Updating deployment webserver-deployment
May 14 23:49:54.699: INFO: Waiting for observed generation 2
May 14 23:49:56.714: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 14 23:49:56.717: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 14 23:49:56.720: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 14 23:49:56.727: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 14 23:49:56.727: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 14 23:49:56.730: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 14 23:49:56.734: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 14 23:49:56.734: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 14 23:49:56.741: INFO: Updating deployment webserver-deployment
May 14 23:49:56.741: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 14 23:49:57.050: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 14 23:49:57.661: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
May 14 23:49:58.366: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6316 /apis/apps/v1/namespaces/deployment-6316/deployments/webserver-deployment 6d6c16f0-e643-42c3-aa27-b8d2dee88ca0 4663829 3 2020-05-14 23:49:42 +0000 UTC map[name:httpd]
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-14 23:49:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00241d4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-14 23:49:55 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-14 23:49:57 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 14 23:49:58.503: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-6316 /apis/apps/v1/namespaces/deployment-6316/replicasets/webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 4663831 3 2020-05-14 23:49:54 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 6d6c16f0-e643-42c3-aa27-b8d2dee88ca0 0xc001867507 0xc001867508}] [] [{kube-controller-manager Update apps/v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c16f0-e643-42c3-aa27-b8d2dee88ca0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001867588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 14 23:49:58.503: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 14 23:49:58.503: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-6316 /apis/apps/v1/namespaces/deployment-6316/replicasets/webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 4663816 3 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 6d6c16f0-e643-42c3-aa27-b8d2dee88ca0 0xc0018675e7 0xc0018675e8}] [] [{kube-controller-manager Update apps/v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c16f0-e643-42c3-aa27-b8d2dee88ca0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001867658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 14 23:49:58.663: INFO: Pod "webserver-deployment-6676bcd6d4-28hdq" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-28hdq webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-28hdq 62a2e6eb-65d1-448e-9c7e-0c0693760f2b 4663725 0 2020-05-14 23:49:54 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc001867b87 0xc001867b88}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-14 23:49:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.663: INFO: Pod "webserver-deployment-6676bcd6d4-7kc2p" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7kc2p webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-7kc2p 57a1eaab-c5a9-4915-bc78-4e93b4e8a299 4663792 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc001867d37 0xc001867d38}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.664: INFO: Pod "webserver-deployment-6676bcd6d4-7mzk4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7mzk4 webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-7mzk4 d16ed0ee-2310-4a5c-8f58-de0b00b5b35b 4663721 0 2020-05-14 23:49:54 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc001867e77 0xc001867e78}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-14 23:49:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.664: INFO: Pod "webserver-deployment-6676bcd6d4-7prdg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7prdg webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-7prdg 0b23cdc6-dd20-4785-aa4a-f62d423e9103 4663747 0 2020-05-14 23:49:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e027 0xc00214e028}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-14 23:49:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.664: INFO: Pod "webserver-deployment-6676bcd6d4-9zkcz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9zkcz webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-9zkcz 4867e3eb-1ca8-42ca-8657-42080995a41e 4663811 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e1d7 0xc00214e1d8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.664: INFO: Pod "webserver-deployment-6676bcd6d4-b4h6h" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b4h6h webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-b4h6h ce7e020a-9a9a-41d9-af7e-a8fe29385e55 4663745 0 2020-05-14 23:49:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e317 0xc00214e318}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-14 23:49:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.664: INFO: Pod "webserver-deployment-6676bcd6d4-drcr7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-drcr7 webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-drcr7 bbad30ed-39ce-4796-89a2-4caf68790ab5 4663815 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e4c7 0xc00214e4c8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.665: INFO: Pod "webserver-deployment-6676bcd6d4-ghqqb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ghqqb webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-ghqqb c096dd79-d199-40fc-9c1c-970c90c6f434 4663791 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e607 0xc00214e608}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.665: INFO: Pod "webserver-deployment-6676bcd6d4-qqzgh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qqzgh webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-qqzgh 1f0aec61-1b12-4ca0-b6bd-b09c1fcd6586 4663809 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e747 0xc00214e748}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContaine
rs:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.665: INFO: Pod "webserver-deployment-6676bcd6d4-sg4tg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sg4tg webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-sg4tg 66c9c281-f44b-46ef-9c0b-11eac61811cc 4663772 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e887 0xc00214e888}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,
AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.666: INFO: Pod "webserver-deployment-6676bcd6d4-sm8v8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sm8v8 webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-sm8v8 6d5265f7-a74e-4eca-b066-4f982eb80a03 4663734 0 2020-05-14 23:49:54 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214e9c7 0xc00214e9c8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-14 23:49:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.666: INFO: Pod "webserver-deployment-6676bcd6d4-tjs7c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tjs7c webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-tjs7c 5d52fef6-2014-4677-8bd0-3ddbaef5845d 4663805 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214eb77 0xc00214eb78}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.666: INFO: Pod "webserver-deployment-6676bcd6d4-w6x9q" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-w6x9q webserver-deployment-6676bcd6d4- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-6676bcd6d4-w6x9q 884fa3e6-7ddd-4032-9c39-4e70bb8557e7 4663808 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 dd227ba8-80e6-414e-bf7a-5beb024b2988 0xc00214ecb7 0xc00214ecb8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd227ba8-80e6-414e-bf7a-5beb024b2988\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.667: INFO: Pod "webserver-deployment-84855cf797-29bkg" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-29bkg webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-29bkg 7f778dc5-f6fd-4f61-aa5c-b77e45379fcf 4663685 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214edf7 0xc00214edf8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},Startup
Probe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.100,StartTime:2020-05-14 23:49:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://05723d74e755c24339c835ca090fac1011098fd97e4efa4ff8f80e47625fbdbd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.667: INFO: Pod "webserver-deployment-84855cf797-7m466" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7m466 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-7m466 ff00542f-3739-462d-96be-9c1dbde833f4 4663667 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214efa7 0xc00214efa8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.61,StartTime:2020-05-14 23:49:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f480c4acbe0fcd68c7cc55ad876134b7ea30d37ca8a63e131c06e34cb62baf3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.668: INFO: Pod "webserver-deployment-84855cf797-c4d5h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-c4d5h webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-c4d5h c6e287b0-fa8c-4457-abde-9bd66811d913 4663794 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f157 0xc00214f158}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.668: INFO: Pod "webserver-deployment-84855cf797-cv4s7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cv4s7 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-cv4s7 600471cb-aadd-43bc-a7c9-4c0d9d6fd6da 4663634 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f287 0xc00214f288}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.57,StartTime:2020-05-14 23:49:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://480e663c4e74b822b29aecc37d7c9ccb578cc6e3f629ab8539b8fe90f27e5b0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.668: INFO: Pod "webserver-deployment-84855cf797-d8m9z" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-d8m9z webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-d8m9z ce1cf151-128c-467e-9280-46494977b3bb 4663646 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f437 0xc00214f438}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.96,StartTime:2020-05-14 23:49:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f52006f12ea8a46db400443e795f05b439c519caef3a6f413e305b478e5addae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.668: INFO: Pod "webserver-deployment-84855cf797-dfwnl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dfwnl webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-dfwnl cf03380d-e24b-4e97-bd88-9eb472dfd833 4663810 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f5e7 0xc00214f5e8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.669: INFO: Pod "webserver-deployment-84855cf797-kchnq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kchnq webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-kchnq 3d4b84cf-5603-4574-b253-25be46414e98 4663694 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f717 0xc00214f718}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.97\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.97,StartTime:2020-05-14 23:49:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0d6ddeae8e798f8448c64cc34468c26bc8a18a1d92aa3c051af2d8b2199ef511,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.669: INFO: Pod "webserver-deployment-84855cf797-kgtdv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kgtdv webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-kgtdv fc6c8736-1d19-44e0-8b47-abe3ed5644f7 4663828 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214f8c7 0xc00214f8c8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-14 23:49:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.669: INFO: Pod "webserver-deployment-84855cf797-n5f87" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-n5f87 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-n5f87 34e63914-ddad-4a86-b713-d3f523c6e9db 4663793 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214fa57 0xc00214fa58}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
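The bulk of each dump above is the managedFields block: server-side-apply bookkeeping in which kube-controller-manager claims the spec fields it wrote and the kubelet claims the status fields it reports. A minimal sketch for listing those owners from a retrieved pod, assuming the k8s.io/api/core/v1 types; printFieldOwners and the podinspect package are illustrative names, not part of the test framework:

package podinspect

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printFieldOwners prints one line per manager, mirroring the
// {kube-controller-manager Update ...} / {kubelet Update ...} entries above.
func printFieldOwners(pod *corev1.Pod) {
	for _, mf := range pod.ManagedFields {
		fmt.Printf("%s %s %s at %v\n", mf.Manager, mf.Operation, mf.APIVersion, mf.Time)
		if mf.FieldsV1 != nil {
			// Raw holds the FieldsV1 JSON seen verbatim in the dumps.
			fmt.Printf("  fields: %s\n", string(mf.FieldsV1.Raw))
		}
	}
}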
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.670: INFO: Pod "webserver-deployment-84855cf797-qlzc7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qlzc7 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-qlzc7 b4bcf40f-dc5f-434a-8985-06100ca7b0be 4663656 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214fb87 0xc00214fb88}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.59,StartTime:2020-05-14 23:49:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d79d88a5c3e48a63a8db995fda168a56030e0970d2656e5d97a1fcab0b4af841,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.670: INFO: Pod "webserver-deployment-84855cf797-qzf2g" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qzf2g webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-qzf2g abcbfe21-0b83-4250-ae92-d1718dcc6212 4663839 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214fd37 0xc00214fd38}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
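The per-pod verdicts in this stretch of the log ("is available" / "is not available") track the Ready condition visible in each Status block: pods such as qlzc7 are Running with Ready=True, while the pods created at 23:49:57 are still Pending with Reason:ContainersNotReady. A hedged approximation of that check, using the same corev1 types; the deployment controller's real availability test additionally honors minReadySeconds, and isPodReady is an illustrative name:

package podinspect

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, the state
// that separates the "available" dumps from the "not available" ones above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}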
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-14 23:49:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.671: INFO: Pod "webserver-deployment-84855cf797-r7sz2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r7sz2 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-r7sz2 c7353e4c-aa2d-436f-8c3d-5407c97a3113 4663820 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc00214fec7 0xc00214fec8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-14 23:49:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.671: INFO: Pod "webserver-deployment-84855cf797-rqbw9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rqbw9 webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-rqbw9 d5cafe52-1225-4198-a081-3630f00b3e6b 4663807 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc002332057 0xc002332058}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.671: INFO: Pod "webserver-deployment-84855cf797-rqhwx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rqhwx webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-rqhwx 1a19c893-ee7c-4d57-a247-04708b463499 4663812 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc002332187 0xc002332188}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.671: INFO: Pod "webserver-deployment-84855cf797-sf4cg" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sf4cg webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-sf4cg 4e45de10-91a4-42cd-93fe-edbb45906834 4663806 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc0023322b7 0xc0023322b8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default
-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.672: INFO: Pod "webserver-deployment-84855cf797-svhmj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-svhmj webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-svhmj 99671e9b-3a67-44b2-b68a-38db929d6c4c 4663835 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc0023323e7 0xc0023323e8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:58 +0000 UTC FieldsV1 
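Every PodSpec above carries the same two NoExecute tolerations with a 300-second grace period even though the deployment's pod template declares none; the apiserver's DefaultTolerationSeconds admission plugin injects them at creation time. Expressed with the k8s.io/api/core/v1 types, the injected values are as follows (a sketch; int64Ptr is a local helper, not a framework function):

package podinspect

import corev1 "k8s.io/api/core/v1"

func int64Ptr(i int64) *int64 { return &i }

// defaultTolerations mirrors the values printed in every PodSpec above:
// evict the pod 300s after its node becomes not-ready or unreachable.
var defaultTolerations = []corev1.Toleration{
	{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: int64Ptr(300),
	},
	{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: int64Ptr(300),
	},
}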
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-14 23:49:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.672: INFO: Pod "webserver-deployment-84855cf797-tjbrf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tjbrf webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-tjbrf 87efc395-1978-4f14-a380-1b05c95b6007 4663787 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc002332577 0xc002332578}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.672: INFO: Pod "webserver-deployment-84855cf797-tk7rk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tk7rk webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-tk7rk 270109b7-6eb0-40be-bef4-04c41d4dfe71 4663644 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc0023326a7 0xc0023326a8}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 
23:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.58,StartTime:2020-05-14 23:49:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9ac58c96e2bc3cad242e260d312afc3088a61845a2c8f2f2032e750aa2ff12bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.672: INFO: Pod "webserver-deployment-84855cf797-v7l4r" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-v7l4r webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-v7l4r 22f4cbc5-f9e5-4489-8c2d-3366842c065c 4663661 0 2020-05-14 23:49:42 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc002332857 0xc002332858}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-14 23:49:52 +0000 UTC FieldsV1 
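Each pod also reports QOSClass:BestEffort, which follows from the empty Limits and Requests in its single httpd container. A simplified sketch of that classification under the same corev1 types (the canonical logic in the kubelet's qos helpers also inspects init containers; isBestEffort is an illustrative name):

package podinspect

import corev1 "k8s.io/api/core/v1"

// isBestEffort reports whether no container sets any resource request or
// limit, which is why every webserver-deployment pod above is BestEffort.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}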
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 
23:49:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.60,StartTime:2020-05-14 23:49:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-14 23:49:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1f5d61b00a724b0d51ee746d59c6259231c399e0336494ff71631dd8b82675c8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 23:49:58.673: INFO: Pod "webserver-deployment-84855cf797-wpf7h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wpf7h webserver-deployment-84855cf797- deployment-6316 /api/v1/namespaces/deployment-6316/pods/webserver-deployment-84855cf797-wpf7h 0613abef-2ea6-43fa-ac2e-34187ad7073d 4663804 0 2020-05-14 23:49:57 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d357c9ae-dc97-4325-9190-b4f5dcc96032 0xc002332a07 0xc002332a08}] [] [{kube-controller-manager Update v1 2020-05-14 23:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d357c9ae-dc97-4325-9190-b4f5dcc96032\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pv8qn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pv8qn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pv8qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-14 23:49:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:49:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6316" for this suite. • [SLOW TEST:16.474 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":16,"skipped":280,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:49:58.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:01.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5486" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":17,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:01.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 14 23:50:18.887: INFO: Successfully updated pod "annotationupdatee65b5f0e-8373-4077-b472-c0342fe83802" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:23.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4814" for this suite. 
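The annotation-update test above works through a downwardAPI volume: the kubelet re-projects metadata.annotations into the mounted file when the pod's annotations change, which is what "Successfully updated pod" confirms. A minimal sketch of the pattern, with hypothetical names (the e2e test builds its pod in Go, not from a manifest):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo            # hypothetical name
    annotations:
      build: "one"
  spec:
    containers:
    - name: client
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  # Changing the annotation updates the projected file in place after the
  # kubelet's next sync, without restarting the container:
  kubectl annotate pod annotation-demo --overwrite build="two"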
• [SLOW TEST:21.722 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":333,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:23.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 23:50:31.491: INFO: DNS probes using dns-6621/dns-test-c9dfab17-75ae-4507-8897-15c58851f461 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:31.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6621" for this suite. 
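The wheezy/jessie probe scripts above reduce to resolving the cluster's built-in API server record over both UDP and TCP. The same two checks can be run by hand from any pod with dig installed; the flags below are the ones the test's own script uses:

  # UDP (dig's default transport) and forced-TCP lookups of the kubernetes service
  dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
  dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A

Each probe writes OK to a results file only when the answer section is non-empty, so empty output here is the failure mode the test polls against.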
• [SLOW TEST:8.555 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":19,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:31.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 14 23:50:32.129: INFO: Pod name pod-release: Found 0 pods out of 1 May 14 23:50:37.132: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:38.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3162" for this suite. • [SLOW TEST:6.479 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":20,"skipped":353,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:38.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3799.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3799.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3799.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3799.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3799.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3799.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 23:50:46.422: INFO: DNS probes using dns-3799/dns-test-15ddee91-33bb-4296-a5e6-83acd2410da4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:46.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3799" for this suite. • [SLOW TEST:8.339 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":21,"skipped":358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:46.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 14 23:50:46.734: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix163925473/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:46.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6947" for this suite. 
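The proxy test above starts kubectl proxy on a Unix domain socket rather than a TCP port, then reads /api/ through it. A hand-run equivalent (the socket path is arbitrary; the test generates a temporary one):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  # curl can speak HTTP over a Unix socket; the hostname part is ignored by the proxy
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/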
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":22,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:46.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 14 23:50:47.111: INFO: Waiting up to 5m0s for pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c" in namespace "downward-api-5084" to be "Succeeded or Failed" May 14 23:50:47.127: INFO: Pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.808453ms May 14 23:50:49.131: INFO: Pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020040359s May 14 23:50:51.134: INFO: Pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c": Phase="Running", Reason="", readiness=true. Elapsed: 4.023058236s May 14 23:50:53.137: INFO: Pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026161797s STEP: Saw pod success May 14 23:50:53.137: INFO: Pod "downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c" satisfied condition "Succeeded or Failed" May 14 23:50:53.139: INFO: Trying to get logs from node latest-worker pod downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c container dapi-container: STEP: delete the pod May 14 23:50:53.196: INFO: Waiting for pod downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c to disappear May 14 23:50:53.295: INFO: Pod downward-api-2ac618fc-4857-41fe-a8ed-8552dad11f0c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:50:53.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5084" for this suite. 
• [SLOW TEST:6.362 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":414,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:50:53.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4387/configmap-test-7159fffa-9814-4c38-9d89-56b879ed3f9e STEP: Creating a pod to test consume configMaps May 14 23:50:53.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033" in namespace "configmap-4387" to be "Succeeded or Failed" May 14 23:50:53.986: INFO: Pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033": Phase="Pending", Reason="", readiness=false. Elapsed: 8.858705ms May 14 23:50:56.160: INFO: Pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182998045s May 14 23:50:58.164: INFO: Pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033": Phase="Running", Reason="", readiness=true. Elapsed: 4.18643515s May 14 23:51:00.167: INFO: Pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190177387s STEP: Saw pod success May 14 23:51:00.167: INFO: Pod "pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033" satisfied condition "Succeeded or Failed" May 14 23:51:00.171: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033 container env-test: STEP: delete the pod May 14 23:51:00.232: INFO: Waiting for pod pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033 to disappear May 14 23:51:00.247: INFO: Pod pod-configmaps-bb77f64e-4ba2-423f-9ff2-6dbcd43ee033 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:51:00.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4387" for this suite. 
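The environment-variable ConfigMap test wires a single key into the container through configMapKeyRef; the pod echoes it and exits, which is why the log waits for "Succeeded". A sketch with illustrative names:

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo $CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: demo-config
            key: data-1
  EOF

Environment variables captured this way are fixed at container start; later ConfigMap edits are not reflected (contrast with the volume variant tested further below).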
• [SLOW TEST:6.954 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:51:00.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 14 23:51:00.370: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 23:51:00.381: INFO: Waiting for terminating namespaces to be deleted... May 14 23:51:00.384: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 14 23:51:00.389: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 14 23:51:00.389: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 14 23:51:00.389: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 14 23:51:00.389: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 14 23:51:00.389: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 14 23:51:00.389: INFO: Container kindnet-cni ready: true, restart count 0 May 14 23:51:00.389: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 14 23:51:00.389: INFO: Container kube-proxy ready: true, restart count 0 May 14 23:51:00.389: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 14 23:51:00.394: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 14 23:51:00.394: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 14 23:51:00.394: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 14 23:51:00.394: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 14 23:51:00.394: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 14 23:51:00.394: INFO: Container kindnet-cni ready: true, restart count 0 May 14 23:51:00.394: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 14 23:51:00.394: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that 
NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5e8a139b-2068-460a-b12b-181d825bb11b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5e8a139b-2068-460a-b12b-181d825bb11b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5e8a139b-2068-460a-b12b-181d825bb11b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:51:08.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1394" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.289 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":25,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:51:08.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3332.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3332.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.13.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.13.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.13.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.13.88_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3332.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3332.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3332.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3332.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3332.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.13.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.13.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.13.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.13.88_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 23:51:14.773: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.776: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.779: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.782: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.822: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.824: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.826: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.828: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:14.841: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 23:51:19.925: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.928: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods 
dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.931: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.934: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.967: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:19.987: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 23:51:24.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.851: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.853: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.872: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the 
server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.876: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.879: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:24.894: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 23:51:29.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.848: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.850: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.864: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.866: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.894: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod 
dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:29.909: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 23:51:34.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.851: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.854: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.872: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.876: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.879: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:34.894: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 
23:51:39.847: INFO: Unable to read wheezy_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.854: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.857: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.876: INFO: Unable to read jessie_udp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.882: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.901: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local from pod dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e: the server could not find the requested resource (get pods dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e) May 14 23:51:39.920: INFO: Lookups using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e failed for: [wheezy_udp@dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@dns-test-service.dns-3332.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_udp@dns-test-service.dns-3332.svc.cluster.local jessie_tcp@dns-test-service.dns-3332.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3332.svc.cluster.local] May 14 23:51:44.907: INFO: DNS probes using dns-3332/dns-test-935b15fc-67ee-456d-b4c1-cdf3f4f9291e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:51:45.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3332" for this suite. 
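The repeated "Unable to read ... the server could not find the requested resource" records above are the probe pod polling before the service's DNS records propagate; the run converges to success at 23:51:44. The probes cover A records for the service name, SRV records for its named http port, and PTR records for its ClusterIP. From inside a pod, the lookups reduce to (a sketch; the names follow this run, where 10.102.13.88 was the ClusterIP):

  dig +noall +answer +search dns-test-service.dns-3332.svc.cluster.local A
  dig +noall +answer +search _http._tcp.dns-test-service.dns-3332.svc.cluster.local SRV
  # Reverse (PTR) lookup of the service ClusterIP; -x is shorthand for the
  # in-addr.arpa query spelled out in the test script.
  dig +noall +answer -x 10.102.13.88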
• [SLOW TEST:37.302 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":26,"skipped":467,"failed":0} SSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:51:45.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 14 23:51:45.919: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6954" to be "Succeeded or Failed" May 14 23:51:45.948: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.282172ms May 14 23:51:47.953: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033670268s May 14 23:51:50.066: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147522031s May 14 23:51:52.070: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151058887s STEP: Saw pod success May 14 23:51:52.070: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 14 23:51:52.072: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 14 23:51:52.104: INFO: Waiting for pod pod-host-path-test to disappear May 14 23:51:52.116: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:51:52.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6954" for this suite. 
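The hostPath test mounts a node directory into the pod and asserts on the mode bits of the mount point (the "correct mode" in the test name). A single-container sketch of the volume wiring; the real test uses two containers, and all names and paths here are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-demo              # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      # Prints the mode of the mount point, which is what the test asserts on.
      command: ["sh", "-c", "ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/hostpath-demo     # hypothetical host path
        type: DirectoryOrCreate
  EOF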
• [SLOW TEST:6.278 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":475,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:51:52.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 14 23:51:52.284: INFO: Waiting up to 1m0s for all nodes to be ready May 14 23:52:52.310: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:52:52.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 14 23:52:56.471: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 14 23:53:12.866: INFO: pods created so far: [1 1 1] May 14 23:53:12.866: INFO: length of pods created so far: 3 May 14 23:53:30.875: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:53:37.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4804" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:53:37.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1104" for this suite. 
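The preemption run above creates ReplicaSets at ascending pod priorities and waits for higher-priority replicas to displace lower-priority ones once the chosen node is full; "pods created so far: [2 2 1]" is the per-priority replica tally. The priority machinery itself is just a PriorityClass referenced by name from a pod template; a sketch, with a hypothetical class name:

  kubectl apply -f - <<'EOF'
  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: high-priority-demo         # hypothetical name
  value: 1000
  globalDefault: false
  description: "Pods using this class may preempt lower-priority pods."
  EOF
  # A pod (or ReplicaSet pod template) opts in via:
  #   spec:
  #     priorityClassName: high-priority-demo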
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74
• [SLOW TEST:105.920 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PreemptionExecutionPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428
runs ReplicaSets to verify preemption running path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":28,"skipped":479,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:53:38.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-966577cc-f8d5-4460-8fe2-bae4315742b9
STEP: Creating a pod to test consume configMaps
May 14 23:53:38.126: INFO: Waiting up to 5m0s for pod "pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc" in namespace "configmap-2037" to be "Succeeded or Failed"
May 14 23:53:38.130: INFO: Pod "pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223251ms
May 14 23:53:40.183: INFO: Pod "pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057421227s
May 14 23:53:42.220: INFO: Pod "pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094572076s
STEP: Saw pod success
May 14 23:53:42.221: INFO: Pod "pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc" satisfied condition "Succeeded or Failed"
May 14 23:53:42.224: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc container configmap-volume-test:
STEP: delete the pod
May 14 23:53:42.270: INFO: Waiting for pod pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc to disappear
May 14 23:53:42.411: INFO: Pod pod-configmaps-04376787-7f59-48dd-9d4b-43c2743f33bc no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:53:42.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2037" for this suite.
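In API terms the ConfigMap test above does three things: create a ConfigMap, mount it into a pod as a volume, and assert that the container reads the key back as a file. A rough Go equivalent with the k8s.io/api types; the names, key, and image here are hypothetical, while the real test randomizes its names as seen in the log.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A sample ConfigMap; the suite generates configmap-test-volume-<uuid> names.
var cm = &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "sample-configmap"},
	Data:       map[string]string{"data-1": "value-1"},
}

// The consuming pod mounts the ConfigMap as a volume; each data key becomes
// a file under the mount path, which the container simply prints.
var consumer = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-sample"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "busybox", // illustrative
			Command: []string{"cat", "/etc/configmap-volume/data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "configmap-volume",
				MountPath: "/etc/configmap-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "sample-configmap"},
				},
			},
		}},
	},
}

func main() {}
```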
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:53:42.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8591 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 23:53:42.500: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 14 23:53:42.585: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 23:53:44.776: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 23:53:46.588: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 14 23:53:48.588: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:53:50.590: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:53:52.590: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:53:54.590: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:53:56.589: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:53:58.589: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:54:00.588: INFO: The status of Pod netserver-0 is Running (Ready = false) May 14 23:54:02.590: INFO: The status of Pod netserver-0 is Running (Ready = true) May 14 23:54:02.596: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 14 23:54:06.668: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.86:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8591 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 23:54:06.668: INFO: >>> kubeConfig: /root/.kube/config I0514 23:54:06.705035 7 log.go:172] (0xc001b202c0) (0xc0020ca640) Create stream I0514 23:54:06.705072 7 log.go:172] (0xc001b202c0) (0xc0020ca640) Stream added, broadcasting: 1 I0514 23:54:06.707537 7 log.go:172] (0xc001b202c0) Reply frame received for 1 I0514 23:54:06.707588 7 log.go:172] (0xc001b202c0) (0xc001ae45a0) Create stream I0514 23:54:06.707605 7 log.go:172] (0xc001b202c0) (0xc001ae45a0) Stream added, broadcasting: 3 I0514 23:54:06.708822 7 log.go:172] (0xc001b202c0) Reply frame received for 3 I0514 23:54:06.708863 7 log.go:172] (0xc001b202c0) (0xc001cb0c80) Create stream I0514 23:54:06.708885 7 log.go:172] (0xc001b202c0) (0xc001cb0c80) Stream added, broadcasting: 5 I0514 23:54:06.710107 7 log.go:172] (0xc001b202c0) Reply frame received for 5 I0514 
23:54:06.854388 7 log.go:172] (0xc001b202c0) Data frame received for 5 I0514 23:54:06.854432 7 log.go:172] (0xc001cb0c80) (5) Data frame handling I0514 23:54:06.854456 7 log.go:172] (0xc001b202c0) Data frame received for 3 I0514 23:54:06.854470 7 log.go:172] (0xc001ae45a0) (3) Data frame handling I0514 23:54:06.854482 7 log.go:172] (0xc001ae45a0) (3) Data frame sent I0514 23:54:06.854550 7 log.go:172] (0xc001b202c0) Data frame received for 3 I0514 23:54:06.854565 7 log.go:172] (0xc001ae45a0) (3) Data frame handling I0514 23:54:06.855996 7 log.go:172] (0xc001b202c0) Data frame received for 1 I0514 23:54:06.856016 7 log.go:172] (0xc0020ca640) (1) Data frame handling I0514 23:54:06.856039 7 log.go:172] (0xc0020ca640) (1) Data frame sent I0514 23:54:06.856051 7 log.go:172] (0xc001b202c0) (0xc0020ca640) Stream removed, broadcasting: 1 I0514 23:54:06.856119 7 log.go:172] (0xc001b202c0) (0xc0020ca640) Stream removed, broadcasting: 1 I0514 23:54:06.856129 7 log.go:172] (0xc001b202c0) (0xc001ae45a0) Stream removed, broadcasting: 3 I0514 23:54:06.856265 7 log.go:172] (0xc001b202c0) Go away received I0514 23:54:06.856286 7 log.go:172] (0xc001b202c0) (0xc001cb0c80) Stream removed, broadcasting: 5 May 14 23:54:06.856: INFO: Found all expected endpoints: [netserver-0] May 14 23:54:06.859: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.119:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8591 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 23:54:06.859: INFO: >>> kubeConfig: /root/.kube/config I0514 23:54:06.884114 7 log.go:172] (0xc001b208f0) (0xc0020caf00) Create stream I0514 23:54:06.884151 7 log.go:172] (0xc001b208f0) (0xc0020caf00) Stream added, broadcasting: 1 I0514 23:54:06.885837 7 log.go:172] (0xc001b208f0) Reply frame received for 1 I0514 23:54:06.885869 7 log.go:172] (0xc001b208f0) (0xc001fe4fa0) Create stream I0514 23:54:06.885880 7 log.go:172] (0xc001b208f0) (0xc001fe4fa0) Stream added, broadcasting: 3 I0514 23:54:06.886574 7 log.go:172] (0xc001b208f0) Reply frame received for 3 I0514 23:54:06.886600 7 log.go:172] (0xc001b208f0) (0xc001cb0d20) Create stream I0514 23:54:06.886610 7 log.go:172] (0xc001b208f0) (0xc001cb0d20) Stream added, broadcasting: 5 I0514 23:54:06.887411 7 log.go:172] (0xc001b208f0) Reply frame received for 5 I0514 23:54:06.972926 7 log.go:172] (0xc001b208f0) Data frame received for 3 I0514 23:54:06.972956 7 log.go:172] (0xc001fe4fa0) (3) Data frame handling I0514 23:54:06.972974 7 log.go:172] (0xc001fe4fa0) (3) Data frame sent I0514 23:54:06.972994 7 log.go:172] (0xc001b208f0) Data frame received for 3 I0514 23:54:06.973007 7 log.go:172] (0xc001fe4fa0) (3) Data frame handling I0514 23:54:06.973536 7 log.go:172] (0xc001b208f0) Data frame received for 5 I0514 23:54:06.973558 7 log.go:172] (0xc001cb0d20) (5) Data frame handling I0514 23:54:06.974952 7 log.go:172] (0xc001b208f0) Data frame received for 1 I0514 23:54:06.974978 7 log.go:172] (0xc0020caf00) (1) Data frame handling I0514 23:54:06.974992 7 log.go:172] (0xc0020caf00) (1) Data frame sent I0514 23:54:06.975013 7 log.go:172] (0xc001b208f0) (0xc0020caf00) Stream removed, broadcasting: 1 I0514 23:54:06.975029 7 log.go:172] (0xc001b208f0) Go away received I0514 23:54:06.975147 7 log.go:172] (0xc001b208f0) (0xc0020caf00) Stream removed, broadcasting: 1 I0514 23:54:06.975166 7 log.go:172] (0xc001b208f0) (0xc001fe4fa0) Stream removed, broadcasting: 3 I0514 23:54:06.975176 
7 log.go:172] (0xc001b208f0) (0xc001cb0d20) Stream removed, broadcasting: 5
May 14 23:54:06.975: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:54:06.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8591" for this suite.
• [SLOW TEST:24.561 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":509,"failed":0}
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 14 23:54:06.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 14 23:54:11.575: INFO: Successfully updated pod "labelsupdate5ff891af-e22b-4a4d-80c7-d2b07e6ddb73"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 14 23:54:13.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1873" for this suite.
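The Downward API test above projects the pod's own metadata.labels into a file; when the labels are mutated through the API (the "Successfully updated pod" record), the kubelet rewrites the file, which is what the assertion watches for. A minimal sketch of such a pod, with hypothetical names and a placeholder image:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labeledPod tails its projected labels file so an observer can see the
// content change after the labels are updated via the API.
var labeledPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "labelsupdate-sample",
		Labels: map[string]string{"key": "value1"},
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox", // illustrative
			Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "podinfo",
				MountPath: "/etc/podinfo",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "labels",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
					}},
				},
			},
		}},
	},
}

func main() {}
```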
• [SLOW TEST:6.993 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":509,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:54:13.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3195 STEP: creating service affinity-nodeport-transition in namespace services-3195 STEP: creating replication controller affinity-nodeport-transition in namespace services-3195 I0514 23:54:14.642972 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3195, replica count: 3 I0514 23:54:17.693356 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 23:54:20.693600 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 23:54:20.704: INFO: Creating new exec pod May 14 23:54:25.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 14 23:54:29.354: INFO: stderr: "I0514 23:54:29.225959 103 log.go:172] (0xc0005e4160) (0xc0006fc5a0) Create stream\nI0514 23:54:29.226004 103 log.go:172] (0xc0005e4160) (0xc0006fc5a0) Stream added, broadcasting: 1\nI0514 23:54:29.227614 103 log.go:172] (0xc0005e4160) Reply frame received for 1\nI0514 23:54:29.227650 103 log.go:172] (0xc0005e4160) (0xc000831f40) Create stream\nI0514 23:54:29.227662 103 log.go:172] (0xc0005e4160) (0xc000831f40) Stream added, broadcasting: 3\nI0514 23:54:29.228488 103 log.go:172] (0xc0005e4160) Reply frame received for 3\nI0514 23:54:29.228535 103 log.go:172] (0xc0005e4160) (0xc0006fce60) Create stream\nI0514 23:54:29.228551 103 log.go:172] (0xc0005e4160) (0xc0006fce60) Stream added, broadcasting: 5\nI0514 23:54:29.229621 103 log.go:172] (0xc0005e4160) Reply frame received for 5\nI0514 23:54:29.328803 103 log.go:172] (0xc0005e4160) Data frame received for 5\nI0514 23:54:29.328855 103 log.go:172] (0xc0006fce60) (5) Data frame handling\nI0514 23:54:29.328879 103 log.go:172] (0xc0006fce60) (5) 
Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0514 23:54:29.347283 103 log.go:172] (0xc0005e4160) Data frame received for 5\nI0514 23:54:29.347315 103 log.go:172] (0xc0006fce60) (5) Data frame handling\nI0514 23:54:29.347333 103 log.go:172] (0xc0006fce60) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0514 23:54:29.347488 103 log.go:172] (0xc0005e4160) Data frame received for 3\nI0514 23:54:29.347512 103 log.go:172] (0xc000831f40) (3) Data frame handling\nI0514 23:54:29.347711 103 log.go:172] (0xc0005e4160) Data frame received for 5\nI0514 23:54:29.347742 103 log.go:172] (0xc0006fce60) (5) Data frame handling\nI0514 23:54:29.348979 103 log.go:172] (0xc0005e4160) Data frame received for 1\nI0514 23:54:29.349003 103 log.go:172] (0xc0006fc5a0) (1) Data frame handling\nI0514 23:54:29.349020 103 log.go:172] (0xc0006fc5a0) (1) Data frame sent\nI0514 23:54:29.349033 103 log.go:172] (0xc0005e4160) (0xc0006fc5a0) Stream removed, broadcasting: 1\nI0514 23:54:29.349055 103 log.go:172] (0xc0005e4160) Go away received\nI0514 23:54:29.349579 103 log.go:172] (0xc0005e4160) (0xc0006fc5a0) Stream removed, broadcasting: 1\nI0514 23:54:29.349601 103 log.go:172] (0xc0005e4160) (0xc000831f40) Stream removed, broadcasting: 3\nI0514 23:54:29.349610 103 log.go:172] (0xc0005e4160) (0xc0006fce60) Stream removed, broadcasting: 5\n" May 14 23:54:29.354: INFO: stdout: "" May 14 23:54:29.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c nc -zv -t -w 2 10.108.199.255 80' May 14 23:54:29.577: INFO: stderr: "I0514 23:54:29.496918 134 log.go:172] (0xc000a86790) (0xc0004e6a00) Create stream\nI0514 23:54:29.496976 134 log.go:172] (0xc000a86790) (0xc0004e6a00) Stream added, broadcasting: 1\nI0514 23:54:29.499422 134 log.go:172] (0xc000a86790) Reply frame received for 1\nI0514 23:54:29.499472 134 log.go:172] (0xc000a86790) (0xc0000ddea0) Create stream\nI0514 23:54:29.499489 134 log.go:172] (0xc000a86790) (0xc0000ddea0) Stream added, broadcasting: 3\nI0514 23:54:29.500451 134 log.go:172] (0xc000a86790) Reply frame received for 3\nI0514 23:54:29.500494 134 log.go:172] (0xc000a86790) (0xc000234140) Create stream\nI0514 23:54:29.500508 134 log.go:172] (0xc000a86790) (0xc000234140) Stream added, broadcasting: 5\nI0514 23:54:29.501432 134 log.go:172] (0xc000a86790) Reply frame received for 5\nI0514 23:54:29.570003 134 log.go:172] (0xc000a86790) Data frame received for 5\nI0514 23:54:29.570059 134 log.go:172] (0xc000a86790) Data frame received for 3\nI0514 23:54:29.570094 134 log.go:172] (0xc0000ddea0) (3) Data frame handling\nI0514 23:54:29.570129 134 log.go:172] (0xc000234140) (5) Data frame handling\nI0514 23:54:29.570152 134 log.go:172] (0xc000234140) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.199.255 80\nConnection to 10.108.199.255 80 port [tcp/http] succeeded!\nI0514 23:54:29.570172 134 log.go:172] (0xc000a86790) Data frame received for 5\nI0514 23:54:29.570216 134 log.go:172] (0xc000234140) (5) Data frame handling\nI0514 23:54:29.571672 134 log.go:172] (0xc000a86790) Data frame received for 1\nI0514 23:54:29.571703 134 log.go:172] (0xc0004e6a00) (1) Data frame handling\nI0514 23:54:29.571728 134 log.go:172] (0xc0004e6a00) (1) Data frame sent\nI0514 23:54:29.571753 134 log.go:172] (0xc000a86790) (0xc0004e6a00) Stream removed, broadcasting: 1\nI0514 23:54:29.571920 134 log.go:172] (0xc000a86790) Go away received\nI0514 
23:54:29.572186 134 log.go:172] (0xc000a86790) (0xc0004e6a00) Stream removed, broadcasting: 1\nI0514 23:54:29.572212 134 log.go:172] (0xc000a86790) (0xc0000ddea0) Stream removed, broadcasting: 3\nI0514 23:54:29.572226 134 log.go:172] (0xc000a86790) (0xc000234140) Stream removed, broadcasting: 5\n" May 14 23:54:29.577: INFO: stdout: "" May 14 23:54:29.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32219' May 14 23:54:29.805: INFO: stderr: "I0514 23:54:29.721677 155 log.go:172] (0xc000acd3f0) (0xc000b92320) Create stream\nI0514 23:54:29.721735 155 log.go:172] (0xc000acd3f0) (0xc000b92320) Stream added, broadcasting: 1\nI0514 23:54:29.727153 155 log.go:172] (0xc000acd3f0) Reply frame received for 1\nI0514 23:54:29.727187 155 log.go:172] (0xc000acd3f0) (0xc000724aa0) Create stream\nI0514 23:54:29.727195 155 log.go:172] (0xc000acd3f0) (0xc000724aa0) Stream added, broadcasting: 3\nI0514 23:54:29.727976 155 log.go:172] (0xc000acd3f0) Reply frame received for 3\nI0514 23:54:29.727994 155 log.go:172] (0xc000acd3f0) (0xc0004aedc0) Create stream\nI0514 23:54:29.728003 155 log.go:172] (0xc000acd3f0) (0xc0004aedc0) Stream added, broadcasting: 5\nI0514 23:54:29.728794 155 log.go:172] (0xc000acd3f0) Reply frame received for 5\nI0514 23:54:29.798079 155 log.go:172] (0xc000acd3f0) Data frame received for 3\nI0514 23:54:29.798113 155 log.go:172] (0xc000724aa0) (3) Data frame handling\nI0514 23:54:29.798216 155 log.go:172] (0xc000acd3f0) Data frame received for 5\nI0514 23:54:29.798247 155 log.go:172] (0xc0004aedc0) (5) Data frame handling\nI0514 23:54:29.798267 155 log.go:172] (0xc0004aedc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32219\nConnection to 172.17.0.13 32219 port [tcp/32219] succeeded!\nI0514 23:54:29.798403 155 log.go:172] (0xc000acd3f0) Data frame received for 5\nI0514 23:54:29.798436 155 log.go:172] (0xc0004aedc0) (5) Data frame handling\nI0514 23:54:29.799967 155 log.go:172] (0xc000acd3f0) Data frame received for 1\nI0514 23:54:29.799993 155 log.go:172] (0xc000b92320) (1) Data frame handling\nI0514 23:54:29.800008 155 log.go:172] (0xc000b92320) (1) Data frame sent\nI0514 23:54:29.800023 155 log.go:172] (0xc000acd3f0) (0xc000b92320) Stream removed, broadcasting: 1\nI0514 23:54:29.800039 155 log.go:172] (0xc000acd3f0) Go away received\nI0514 23:54:29.800434 155 log.go:172] (0xc000acd3f0) (0xc000b92320) Stream removed, broadcasting: 1\nI0514 23:54:29.800456 155 log.go:172] (0xc000acd3f0) (0xc000724aa0) Stream removed, broadcasting: 3\nI0514 23:54:29.800472 155 log.go:172] (0xc000acd3f0) (0xc0004aedc0) Stream removed, broadcasting: 5\n" May 14 23:54:29.805: INFO: stdout: "" May 14 23:54:29.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32219' May 14 23:54:30.018: INFO: stderr: "I0514 23:54:29.940641 177 log.go:172] (0xc000a99130) (0xc000c1a140) Create stream\nI0514 23:54:29.940728 177 log.go:172] (0xc000a99130) (0xc000c1a140) Stream added, broadcasting: 1\nI0514 23:54:29.944575 177 log.go:172] (0xc000a99130) Reply frame received for 1\nI0514 23:54:29.944615 177 log.go:172] (0xc000a99130) (0xc00071cfa0) Create stream\nI0514 23:54:29.944634 177 log.go:172] (0xc000a99130) (0xc00071cfa0) Stream added, broadcasting: 3\nI0514 23:54:29.945686 177 log.go:172] (0xc000a99130) 
Reply frame received for 3\nI0514 23:54:29.945733 177 log.go:172] (0xc000a99130) (0xc0006b7040) Create stream\nI0514 23:54:29.945746 177 log.go:172] (0xc000a99130) (0xc0006b7040) Stream added, broadcasting: 5\nI0514 23:54:29.946594 177 log.go:172] (0xc000a99130) Reply frame received for 5\nI0514 23:54:30.009424 177 log.go:172] (0xc000a99130) Data frame received for 5\nI0514 23:54:30.009463 177 log.go:172] (0xc0006b7040) (5) Data frame handling\nI0514 23:54:30.009495 177 log.go:172] (0xc0006b7040) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32219\nI0514 23:54:30.010174 177 log.go:172] (0xc000a99130) Data frame received for 5\nI0514 23:54:30.010191 177 log.go:172] (0xc0006b7040) (5) Data frame handling\nI0514 23:54:30.010207 177 log.go:172] (0xc0006b7040) (5) Data frame sent\nConnection to 172.17.0.12 32219 port [tcp/32219] succeeded!\nI0514 23:54:30.010683 177 log.go:172] (0xc000a99130) Data frame received for 5\nI0514 23:54:30.010699 177 log.go:172] (0xc0006b7040) (5) Data frame handling\nI0514 23:54:30.010717 177 log.go:172] (0xc000a99130) Data frame received for 3\nI0514 23:54:30.010729 177 log.go:172] (0xc00071cfa0) (3) Data frame handling\nI0514 23:54:30.012283 177 log.go:172] (0xc000a99130) Data frame received for 1\nI0514 23:54:30.012300 177 log.go:172] (0xc000c1a140) (1) Data frame handling\nI0514 23:54:30.012315 177 log.go:172] (0xc000c1a140) (1) Data frame sent\nI0514 23:54:30.012336 177 log.go:172] (0xc000a99130) (0xc000c1a140) Stream removed, broadcasting: 1\nI0514 23:54:30.012364 177 log.go:172] (0xc000a99130) Go away received\nI0514 23:54:30.012875 177 log.go:172] (0xc000a99130) (0xc000c1a140) Stream removed, broadcasting: 1\nI0514 23:54:30.012899 177 log.go:172] (0xc000a99130) (0xc00071cfa0) Stream removed, broadcasting: 3\nI0514 23:54:30.012912 177 log.go:172] (0xc000a99130) (0xc0006b7040) Stream removed, broadcasting: 5\n" May 14 23:54:30.018: INFO: stdout: "" May 14 23:54:30.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32219/ ; done' May 14 23:54:30.334: INFO: stderr: "I0514 23:54:30.161654 196 log.go:172] (0xc00094f340) (0xc0007d34a0) Create stream\nI0514 23:54:30.161707 196 log.go:172] (0xc00094f340) (0xc0007d34a0) Stream added, broadcasting: 1\nI0514 23:54:30.164515 196 log.go:172] (0xc00094f340) Reply frame received for 1\nI0514 23:54:30.164561 196 log.go:172] (0xc00094f340) (0xc0007d3ea0) Create stream\nI0514 23:54:30.164588 196 log.go:172] (0xc00094f340) (0xc0007d3ea0) Stream added, broadcasting: 3\nI0514 23:54:30.165699 196 log.go:172] (0xc00094f340) Reply frame received for 3\nI0514 23:54:30.165736 196 log.go:172] (0xc00094f340) (0xc000834e60) Create stream\nI0514 23:54:30.165749 196 log.go:172] (0xc00094f340) (0xc000834e60) Stream added, broadcasting: 5\nI0514 23:54:30.166970 196 log.go:172] (0xc00094f340) Reply frame received for 5\nI0514 23:54:30.234462 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.234508 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.234540 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.234624 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.234648 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.234671 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:32219/\nI0514 23:54:30.239866 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.239888 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.239904 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.240704 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.240739 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.240750 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.240768 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.240776 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.240786 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.246192 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.246210 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.246227 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.246612 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.246648 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.246676 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.246693 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.246719 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.246745 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.253868 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.253897 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.253921 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.254563 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.254592 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.254620 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.254635 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.254642 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.254655 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.260396 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.260420 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.260440 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.260939 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.260959 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.260969 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.260983 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.260990 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.260997 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.264620 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.264642 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.264662 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.265875 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.265897 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.265905 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 
23:54:30.265926 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.265972 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.266011 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.274572 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.274603 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.274626 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.275144 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.275186 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.275209 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.275244 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.275286 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.275315 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.279494 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.279508 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.279520 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.279842 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.279855 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.279861 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.279868 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.279872 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.279884 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.284640 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.284655 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.284668 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.285259 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.285283 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.285290 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.285301 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.285306 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.285312 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.291130 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.291155 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.291186 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.291791 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.291817 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.291830 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.291856 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.291870 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.291881 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.297284 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.297306 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.297315 196 log.go:172] 
(0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.297832 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.297853 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.297860 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.297874 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.297880 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.297888 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.302416 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.302435 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.302456 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.302900 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.302930 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.302947 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.302966 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.302974 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.302987 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.308604 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.308619 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.308629 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.309061 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.309087 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.309098 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.309279 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.309301 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.309326 196 log.go:172] (0xc000834e60) (5) Data frame sent\nI0514 23:54:30.309350 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.309365 196 log.go:172] (0xc000834e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.309383 196 log.go:172] (0xc000834e60) (5) Data frame sent\nI0514 23:54:30.312526 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.312545 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.312568 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.313047 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.313067 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.313087 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.313302 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.313329 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.313367 196 log.go:172] (0xc000834e60) (5) Data frame sent\nI0514 23:54:30.313392 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.313404 196 log.go:172] (0xc000834e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.313429 196 log.go:172] (0xc000834e60) (5) Data frame sent\nI0514 23:54:30.317385 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.317415 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.317444 196 log.go:172] (0xc0007d3ea0) (3) Data frame 
sent\nI0514 23:54:30.317878 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.317896 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.317933 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.317981 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.318006 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.318035 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.321753 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.321781 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.321808 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.322055 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.322072 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.322083 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.322110 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.322136 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.322158 196 log.go:172] (0xc000834e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.326071 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.326090 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.326106 196 log.go:172] (0xc0007d3ea0) (3) Data frame sent\nI0514 23:54:30.326845 196 log.go:172] (0xc00094f340) Data frame received for 5\nI0514 23:54:30.326916 196 log.go:172] (0xc000834e60) (5) Data frame handling\nI0514 23:54:30.326943 196 log.go:172] (0xc00094f340) Data frame received for 3\nI0514 23:54:30.326958 196 log.go:172] (0xc0007d3ea0) (3) Data frame handling\nI0514 23:54:30.328751 196 log.go:172] (0xc00094f340) Data frame received for 1\nI0514 23:54:30.328771 196 log.go:172] (0xc0007d34a0) (1) Data frame handling\nI0514 23:54:30.328800 196 log.go:172] (0xc0007d34a0) (1) Data frame sent\nI0514 23:54:30.328826 196 log.go:172] (0xc00094f340) (0xc0007d34a0) Stream removed, broadcasting: 1\nI0514 23:54:30.329291 196 log.go:172] (0xc00094f340) Go away received\nI0514 23:54:30.329746 196 log.go:172] (0xc00094f340) (0xc0007d34a0) Stream removed, broadcasting: 1\nI0514 23:54:30.329779 196 log.go:172] (0xc00094f340) (0xc0007d3ea0) Stream removed, broadcasting: 3\nI0514 23:54:30.329791 196 log.go:172] (0xc00094f340) (0xc000834e60) Stream removed, broadcasting: 5\n" May 14 23:54:30.335: INFO: stdout: "\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-tfrm8\naffinity-nodeport-transition-tfrm8\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-nf8gl\naffinity-nodeport-transition-tfrm8\naffinity-nodeport-transition-tfrm8\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-tfrm8\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9" May 14 23:54:30.335: INFO: Received response from host: May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from 
host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-tfrm8 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-tfrm8 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-nf8gl May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-tfrm8 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-tfrm8 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-tfrm8 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.335: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3195 execpod-affinity6p4jl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32219/ ; done' May 14 23:54:30.639: INFO: stderr: "I0514 23:54:30.477472 215 log.go:172] (0xc000a7ce70) (0xc000149900) Create stream\nI0514 23:54:30.477527 215 log.go:172] (0xc000a7ce70) (0xc000149900) Stream added, broadcasting: 1\nI0514 23:54:30.479383 215 log.go:172] (0xc000a7ce70) Reply frame received for 1\nI0514 23:54:30.479415 215 log.go:172] (0xc000a7ce70) (0xc000385d60) Create stream\nI0514 23:54:30.479424 215 log.go:172] (0xc000a7ce70) (0xc000385d60) Stream added, broadcasting: 3\nI0514 23:54:30.480182 215 log.go:172] (0xc000a7ce70) Reply frame received for 3\nI0514 23:54:30.480218 215 log.go:172] (0xc000a7ce70) (0xc000b50280) Create stream\nI0514 23:54:30.480232 215 log.go:172] (0xc000a7ce70) (0xc000b50280) Stream added, broadcasting: 5\nI0514 23:54:30.481030 215 log.go:172] (0xc000a7ce70) Reply frame received for 5\nI0514 23:54:30.539940 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.539966 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.539991 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.540027 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.540038 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.540067 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.544003 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.544018 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.544032 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.544558 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.544573 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.544590 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.544598 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.544606 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.544617 215 log.go:172] (0xc000385d60) (3) Data frame 
sent\nI0514 23:54:30.548879 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.548893 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.548905 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.549561 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.549585 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.549595 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.549614 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.549621 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.549631 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.555209 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.555223 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.555240 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.555882 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.555912 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.555923 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.555931 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.555938 215 log.go:172] (0xc000b50280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.555956 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.555965 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.555972 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.555984 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.561587 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.561605 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.561622 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.562176 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.562201 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.562222 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.562295 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.562337 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.562372 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.566828 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.566846 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.566855 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.567626 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.567653 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.567679 215 log.go:172] (0xc000b50280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.567702 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.567725 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.567740 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.571118 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.571142 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.571161 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.571283 215 
log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.571311 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.571329 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.571353 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.571380 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.571399 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.578406 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.578437 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.578458 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.578740 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.578759 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.578774 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.578791 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.578801 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.578812 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.582721 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.582765 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.582799 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.583210 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.583241 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.583254 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.583265 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.583279 215 log.go:172] (0xc000b50280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.583297 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.583331 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.583356 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.583378 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.590327 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.590346 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.590365 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.590765 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.590803 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.590840 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.590858 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.590867 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.590888 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.597269 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.597335 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.597343 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.598001 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.598017 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.598026 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:32219/\nI0514 23:54:30.598033 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.598052 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.598065 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.602953 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.602975 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.603006 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.603361 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.603376 215 log.go:172] (0xc000b50280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.603398 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.603437 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.603457 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.603487 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.609554 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.609566 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.609572 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.610124 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.610136 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.610142 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.610213 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.610230 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.610245 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.613563 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.613595 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.613647 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.613779 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.613789 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.613794 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.613807 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.613816 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.613823 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.617436 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.617469 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.617496 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.617872 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.617901 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.617955 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.617976 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.618015 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.618045 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.622058 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.622074 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.622093 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 
23:54:30.622662 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.622679 215 log.go:172] (0xc000b50280) (5) Data frame handling\n+ echo\nI0514 23:54:30.622793 215 log.go:172] (0xc000b50280) (5) Data frame sent\nI0514 23:54:30.623330 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.623362 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.623385 215 log.go:172] (0xc000b50280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32219/\nI0514 23:54:30.623486 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.623510 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.623529 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.631498 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.631533 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.631556 215 log.go:172] (0xc000385d60) (3) Data frame sent\nI0514 23:54:30.632265 215 log.go:172] (0xc000a7ce70) Data frame received for 3\nI0514 23:54:30.632295 215 log.go:172] (0xc000385d60) (3) Data frame handling\nI0514 23:54:30.632657 215 log.go:172] (0xc000a7ce70) Data frame received for 5\nI0514 23:54:30.632685 215 log.go:172] (0xc000b50280) (5) Data frame handling\nI0514 23:54:30.634257 215 log.go:172] (0xc000a7ce70) Data frame received for 1\nI0514 23:54:30.634308 215 log.go:172] (0xc000149900) (1) Data frame handling\nI0514 23:54:30.634347 215 log.go:172] (0xc000149900) (1) Data frame sent\nI0514 23:54:30.634378 215 log.go:172] (0xc000a7ce70) (0xc000149900) Stream removed, broadcasting: 1\nI0514 23:54:30.634644 215 log.go:172] (0xc000a7ce70) Go away received\nI0514 23:54:30.634704 215 log.go:172] (0xc000a7ce70) (0xc000149900) Stream removed, broadcasting: 1\nI0514 23:54:30.634720 215 log.go:172] (0xc000a7ce70) (0xc000385d60) Stream removed, broadcasting: 3\nI0514 23:54:30.634727 215 log.go:172] (0xc000a7ce70) (0xc000b50280) Stream removed, broadcasting: 5\n" May 14 23:54:30.639: INFO: stdout: "\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9\naffinity-nodeport-transition-bcmx9" May 14 23:54:30.640: INFO: Received response from host: May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from 
host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Received response from host: affinity-nodeport-transition-bcmx9 May 14 23:54:30.640: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3195, will wait for the garbage collector to delete the pods May 14 23:54:30.756: INFO: Deleting ReplicationController affinity-nodeport-transition took: 21.342499ms May 14 23:54:31.356: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.272225ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:54:45.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3195" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:31.439 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":32,"skipped":512,"failed":0} SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:54:45.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 14 23:54:45.471: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:54:49.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7461" for this suite. 
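
The websocket log-retrieval check that follows below exercises the pod /log subresource. As an illustrative aside (not part of this run; the pod name and in-cluster token path are assumptions), the same subresource can be streamed over plain HTTPS from inside a pod:

  # Sketch only: stream the same /log endpoint the test reads over a websocket.
  # Assumes an in-cluster service account token and a pod "mypod" in "default";
  # -k skips TLS verification for brevity.
  TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
  curl -sk -H "Authorization: Bearer ${TOKEN}" \
    "https://kubernetes.default.svc/api/v1/namespaces/default/pods/mypod/log?follow=true"
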
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:54:49.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 14 23:54:56.145: INFO: Successfully updated pod "adopt-release-6fg5l" STEP: Checking that the Job readopts the Pod May 14 23:54:56.146: INFO: Waiting up to 15m0s for pod "adopt-release-6fg5l" in namespace "job-3587" to be "adopted" May 14 23:54:56.188: INFO: Pod "adopt-release-6fg5l": Phase="Running", Reason="", readiness=true. Elapsed: 42.471229ms May 14 23:54:58.192: INFO: Pod "adopt-release-6fg5l": Phase="Running", Reason="", readiness=true. Elapsed: 2.046455031s May 14 23:54:58.192: INFO: Pod "adopt-release-6fg5l" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 14 23:54:58.710: INFO: Successfully updated pod "adopt-release-6fg5l" STEP: Checking that the Job releases the Pod May 14 23:54:58.710: INFO: Waiting up to 15m0s for pod "adopt-release-6fg5l" in namespace "job-3587" to be "released" May 14 23:54:58.751: INFO: Pod "adopt-release-6fg5l": Phase="Running", Reason="", readiness=true. Elapsed: 41.479299ms May 14 23:55:00.894: INFO: Pod "adopt-release-6fg5l": Phase="Running", Reason="", readiness=true. Elapsed: 2.184403501s May 14 23:55:00.894: INFO: Pod "adopt-release-6fg5l" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:55:00.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3587" for this suite. 
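
The adopt/release flow above hinges on the Job's label selector: the controller adopts bare pods whose labels match (setting an ownerReference) and releases pods whose matching labels are removed. A CLI sketch of the release side, with made-up names rather than this suite's pods:

  # Sketch only: watch a Job release a pod once its selector labels are stripped.
  kubectl create job adopt-demo --image=busybox -- sleep 300
  POD=$(kubectl get pods -l job-name=adopt-demo -o jsonpath='{.items[0].metadata.name}')
  kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences[0].kind}'   # Job
  kubectl label pod "$POD" job-name- controller-uid-   # remove the matching labels
  # shortly afterwards the controller drops its ownerReference, releasing the pod
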
• [SLOW TEST:11.556 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":34,"skipped":551,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:55:01.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 23:55:01.531: INFO: Waiting up to 5m0s for pod "pod-708a9a5f-6eee-49de-ab40-8888b165de28" in namespace "emptydir-2258" to be "Succeeded or Failed" May 14 23:55:01.554: INFO: Pod "pod-708a9a5f-6eee-49de-ab40-8888b165de28": Phase="Pending", Reason="", readiness=false. Elapsed: 22.360429ms May 14 23:55:03.556: INFO: Pod "pod-708a9a5f-6eee-49de-ab40-8888b165de28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025224822s May 14 23:55:05.560: INFO: Pod "pod-708a9a5f-6eee-49de-ab40-8888b165de28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028454086s STEP: Saw pod success May 14 23:55:05.560: INFO: Pod "pod-708a9a5f-6eee-49de-ab40-8888b165de28" satisfied condition "Succeeded or Failed" May 14 23:55:05.562: INFO: Trying to get logs from node latest-worker pod pod-708a9a5f-6eee-49de-ab40-8888b165de28 container test-container: STEP: delete the pod May 14 23:55:05.596: INFO: Waiting for pod pod-708a9a5f-6eee-49de-ab40-8888b165de28 to disappear May 14 23:55:05.631: INFO: Pod pod-708a9a5f-6eee-49de-ab40-8888b165de28 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:55:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2258" for this suite. 
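
The emptyDir case that follows verifies that a file written with mode 0644 on a memory-backed (tmpfs) emptyDir reads back with that mode as root. A hand-rolled equivalent (pod name and probe command are illustrative, not the suite's own):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory   # tmpfs-backed, as in the (root,0644,tmpfs) variant
  EOF
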
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":569,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:55:05.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-a4b8aa91-0aa8-47cd-a6aa-38f437874d81 STEP: Creating a pod to test consume configMaps May 14 23:55:05.926: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a" in namespace "projected-7634" to be "Succeeded or Failed" May 14 23:55:05.940: INFO: Pod "pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031457ms May 14 23:55:08.037: INFO: Pod "pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111155915s May 14 23:55:10.044: INFO: Pod "pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117822687s STEP: Saw pod success May 14 23:55:10.044: INFO: Pod "pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a" satisfied condition "Succeeded or Failed" May 14 23:55:10.047: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a container projected-configmap-volume-test: STEP: delete the pod May 14 23:55:10.114: INFO: Waiting for pod pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a to disappear May 14 23:55:10.122: INFO: Pod pod-projected-configmaps-9c723fae-d5fe-49d5-bcd6-52954b8d217a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:55:10.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7634" for this suite. 
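
The projected-volume case that follows sets defaultMode on the volume, which applies to every projected key unless an individual item overrides it. A minimal reproduction (all names and the 0400 mode are placeholders):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config
  data:
    key: value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/cfg/"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400   # applied to the projected file below
        sources:
        - configMap:
            name: demo-config
  EOF
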
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":571,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:55:10.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-6f38ce95-784c-400b-8fad-255d3b3bf15f in namespace container-probe-737 May 14 23:55:14.304: INFO: Started pod busybox-6f38ce95-784c-400b-8fad-255d3b3bf15f in namespace container-probe-737 STEP: checking the pod's current state and verifying that restartCount is present May 14 23:55:14.307: INFO: Initial restart count of pod busybox-6f38ce95-784c-400b-8fad-255d3b3bf15f is 0 May 14 23:56:08.608: INFO: Restart count of pod container-probe-737/busybox-6f38ce95-784c-400b-8fad-255d3b3bf15f is now 1 (54.301358221s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:56:08.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-737" for this suite. 
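
The restart counted above is driven by an exec liveness probe: once "cat /tmp/health" starts failing, the kubelet kills and restarts the container. The canonical self-breaking pod looks like this (illustrative; it mirrors the upstream docs pattern rather than this suite's exact pod):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      # healthy for 30s, then the probe starts failing and the kubelet restarts the container
      command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl get pod liveness-exec-demo -w   # watch the RESTARTS column increment
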
• [SLOW TEST:58.538 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:56:08.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:56:08.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5491" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":38,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:56:08.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-83f34154-ae0a-4825-b2e9-cd5bb6771bf2 STEP: Creating a pod to test consume configMaps May 14 23:56:09.330: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223" in namespace "configmap-9330" to be "Succeeded or Failed" May 14 23:56:09.388: INFO: Pod "pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223": Phase="Pending", Reason="", readiness=false. Elapsed: 58.281489ms May 14 23:56:11.496: INFO: Pod "pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.165655133s May 14 23:56:13.501: INFO: Pod "pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170584788s STEP: Saw pod success May 14 23:56:13.501: INFO: Pod "pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223" satisfied condition "Succeeded or Failed" May 14 23:56:13.503: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223 container configmap-volume-test: STEP: delete the pod May 14 23:56:13.660: INFO: Waiting for pod pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223 to disappear May 14 23:56:13.694: INFO: Pod pod-configmaps-7ba026a3-40ab-4f69-a04a-bb336f971223 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:56:13.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9330" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:56:13.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-03282c1a-fae9-466b-8c92-381c98144333 STEP: Creating a pod to test consume configMaps May 14 23:56:13.810: INFO: Waiting up to 5m0s for pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f" in namespace "configmap-1298" to be "Succeeded or Failed" May 14 23:56:13.812: INFO: Pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138469ms May 14 23:56:15.982: INFO: Pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172024811s May 14 23:56:17.985: INFO: Pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f": Phase="Running", Reason="", readiness=true. Elapsed: 4.175760422s May 14 23:56:19.988: INFO: Pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.178262657s STEP: Saw pod success May 14 23:56:19.988: INFO: Pod "pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f" satisfied condition "Succeeded or Failed" May 14 23:56:19.990: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f container configmap-volume-test: STEP: delete the pod May 14 23:56:20.042: INFO: Waiting for pod pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f to disappear May 14 23:56:20.136: INFO: Pod pod-configmaps-096226f1-cfb6-45fd-8259-8d9eb7033b0f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 14 23:56:20.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1298" for this suite. • [SLOW TEST:6.456 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":634,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 14 23:56:20.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 14 23:56:20.231: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 23:56:20.316: INFO: Waiting for terminating namespaces to be deleted... 
May 14 23:56:20.319: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 14 23:56:20.329: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 14 23:56:20.329: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 14 23:56:20.329: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 14 23:56:20.329: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 14 23:56:20.329: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 14 23:56:20.329: INFO: Container kindnet-cni ready: true, restart count 0 May 14 23:56:20.329: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 14 23:56:20.329: INFO: Container kube-proxy ready: true, restart count 0 May 14 23:56:20.329: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 14 23:56:20.332: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 14 23:56:20.332: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 14 23:56:20.332: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 14 23:56:20.332: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 14 23:56:20.332: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 14 23:56:20.332: INFO: Container kindnet-cni ready: true, restart count 0 May 14 23:56:20.332: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 14 23:56:20.332: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3c68b58d-5549-41b4-85f7-5eba11fd3982 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3c68b58d-5549-41b4-85f7-5eba11fd3982 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3c68b58d-5549-41b4-85f7-5eba11fd3982 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:01:28.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7808" for this suite. 
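
The predicate validated above boils down to: a hostPort is a node-scoped resource, and hostIP 0.0.0.0 claims the port on all node addresses, so a second pod asking for the same port on 127.0.0.1 cannot land on the same node. A sketch with placeholder pod names, pinned via nodeSelector to one node (here reusing this run's node name latest-worker) so the second pod stays Pending:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostport-a
  spec:
    nodeSelector:
      kubernetes.io/hostname: latest-worker
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
      ports:
      - containerPort: 8080
        hostPort: 54322
        hostIP: 0.0.0.0            # claims the port on every node address
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostport-b
  spec:
    nodeSelector:
      kubernetes.io/hostname: latest-worker
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
      ports:
      - containerPort: 8080
        hostPort: 54322
        hostIP: 127.0.0.1          # conflicts with hostport-a on this node
  EOF
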
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.452 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":41,"skipped":634,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:01:28.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:01:46.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4112" for this suite. 
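
"Locally restarted" in the Job test that follows means restartPolicy: OnFailure, where the kubelet restarts the failing container in place instead of the Job creating replacement pods. A self-contained sketch that fails exactly once per pod (state kept on an emptyDir so it survives the container restart; all names are made up):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: flaky-demo
  spec:
    completions: 3
    parallelism: 2
    template:
      spec:
        restartPolicy: OnFailure   # failed containers restart inside the same pod
        containers:
        - name: work
          image: busybox
          # first attempt fails, the in-place restart then succeeds
          command: ["sh", "-c", "if [ -f /state/ran ]; then exit 0; else touch /state/ran; exit 1; fi"]
          volumeMounts:
          - name: state
            mountPath: /state
        volumes:
        - name: state
          emptyDir: {}
  EOF
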
• [SLOW TEST:18.100 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":42,"skipped":647,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:01:46.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:01:47.954: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:01:49.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:01:52.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097708, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097707, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:01:55.057: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 15 00:01:55.081: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:01:55.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3991" for this suite. STEP: Destroying namespace "webhook-3991-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.466 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":43,"skipped":647,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:01:55.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:01:55.265: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ba746887-3833-4c34-9c9b-7c5b1364aa02" in namespace "security-context-test-8280" to be "Succeeded or Failed" May 15 00:01:55.290: INFO: Pod "busybox-readonly-false-ba746887-3833-4c34-9c9b-7c5b1364aa02": Phase="Pending", Reason="", readiness=false. Elapsed: 24.741459ms May 15 00:01:57.293: INFO: Pod "busybox-readonly-false-ba746887-3833-4c34-9c9b-7c5b1364aa02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028373324s May 15 00:01:59.297: INFO: Pod "busybox-readonly-false-ba746887-3833-4c34-9c9b-7c5b1364aa02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03197574s May 15 00:01:59.297: INFO: Pod "busybox-readonly-false-ba746887-3833-4c34-9c9b-7c5b1364aa02" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:01:59.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8280" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":650,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:01:59.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:15.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-421" for this suite. • [SLOW TEST:16.612 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":45,"skipped":652,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:15.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 15 00:02:16.004: INFO: namespace kubectl-6384 May 15 00:02:16.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6384' May 15 00:02:16.329: INFO: stderr: "" May 15 00:02:16.329: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 15 00:02:17.333: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:02:17.333: INFO: Found 0 / 1 May 15 00:02:18.344: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:02:18.344: INFO: Found 0 / 1 May 15 00:02:19.333: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:02:19.333: INFO: Found 0 / 1 May 15 00:02:20.338: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:02:20.338: INFO: Found 1 / 1 May 15 00:02:20.338: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 00:02:20.342: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:02:20.342: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 00:02:20.342: INFO: wait on agnhost-master startup in kubectl-6384 May 15 00:02:20.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-jmjg8 agnhost-master --namespace=kubectl-6384' May 15 00:02:20.487: INFO: stderr: "" May 15 00:02:20.487: INFO: stdout: "Paused\n" STEP: exposing RC May 15 00:02:20.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6384' May 15 00:02:20.637: INFO: stderr: "" May 15 00:02:20.637: INFO: stdout: "service/rm2 exposed\n" May 15 00:02:20.650: INFO: Service rm2 in namespace kubectl-6384 found. STEP: exposing service May 15 00:02:22.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6384' May 15 00:02:22.830: INFO: stderr: "" May 15 00:02:22.830: INFO: stdout: "service/rm3 exposed\n" May 15 00:02:22.835: INFO: Service rm3 in namespace kubectl-6384 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:24.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6384" for this suite. 
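
The expose sequence above chains two steps: an RC is exposed as service rm2, and rm2 is itself exposed as rm3 (expose works on anything carrying a selector, services included). Outside the suite the same shape looks like this, with placeholder names:

  # Sketch only: expose a workload, then re-expose the resulting service.
  kubectl create deployment web --image=nginx
  kubectl expose deployment web --name=web-1 --port=1234 --target-port=80
  kubectl expose service web-1 --name=web-2 --port=2345 --target-port=80
  kubectl get svc web-1 web-2
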
• [SLOW TEST:8.938 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":46,"skipped":666,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:24.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:24.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1903" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":47,"skipped":667,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:24.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:25.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8736" for this suite. 
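
The "secure master service" assertion that follows reduces to: the built-in kubernetes Service in the default namespace must expose the apiserver over HTTPS on port 443. A quick check by hand:

  kubectl get service kubernetes -n default      # expect port 443/TCP on a ClusterIP
  kubectl get endpoints kubernetes -n default    # the apiserver endpoint(s) backing it
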
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":48,"skipped":674,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:25.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-550362d7-d640-4dd3-bef1-63a3d96e415a STEP: Creating a pod to test consume secrets May 15 00:02:25.131: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5" in namespace "projected-2821" to be "Succeeded or Failed" May 15 00:02:25.143: INFO: Pod "pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.756011ms May 15 00:02:27.398: INFO: Pod "pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267481969s May 15 00:02:29.402: INFO: Pod "pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.270953088s STEP: Saw pod success May 15 00:02:29.402: INFO: Pod "pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5" satisfied condition "Succeeded or Failed" May 15 00:02:29.404: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5 container secret-volume-test: STEP: delete the pod May 15 00:02:29.668: INFO: Waiting for pod pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5 to disappear May 15 00:02:29.671: INFO: Pod pod-projected-secrets-392e1f25-4286-4290-bede-96019a9663b5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:29.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2821" for this suite. 
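
In the projected-secret case above, one secret is consumed through two separate projected volumes in the same pod, and each mount point gets its own copy of the keys. Minimal shape with made-up names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: demo-secret
  stringData:
    password: s3cret
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: multi-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/a/password /etc/b/password"]
      volumeMounts:
      - name: vol-a
        mountPath: /etc/a
      - name: vol-b
        mountPath: /etc/b
    volumes:
    - name: vol-a
      projected:
        sources:
        - secret:
            name: demo-secret
    - name: vol-b
      projected:
        sources:
        - secret:
            name: demo-secret
  EOF
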
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":49,"skipped":688,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:29.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 15 00:02:29.752: INFO: Waiting up to 5m0s for pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272" in namespace "containers-5876" to be "Succeeded or Failed" May 15 00:02:29.762: INFO: Pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272": Phase="Pending", Reason="", readiness=false. Elapsed: 9.602759ms May 15 00:02:31.985: INFO: Pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233239386s May 15 00:02:33.989: INFO: Pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272": Phase="Running", Reason="", readiness=true. Elapsed: 4.237283961s May 15 00:02:35.994: INFO: Pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.241794228s STEP: Saw pod success May 15 00:02:35.994: INFO: Pod "client-containers-16652068-595e-45ba-9ad8-d8a95f293272" satisfied condition "Succeeded or Failed" May 15 00:02:35.997: INFO: Trying to get logs from node latest-worker2 pod client-containers-16652068-595e-45ba-9ad8-d8a95f293272 container test-container: STEP: delete the pod May 15 00:02:36.027: INFO: Waiting for pod client-containers-16652068-595e-45ba-9ad8-d8a95f293272 to disappear May 15 00:02:36.040: INFO: Pod client-containers-16652068-595e-45ba-9ad8-d8a95f293272 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:36.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5876" for this suite. 
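
The "docker entrypoint" case above is just the command field: in a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD. For example (image and text are placeholders):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: entrypoint-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["/bin/echo"]        # overrides the image ENTRYPOINT
      args: ["hello", "world"]      # overrides the image CMD
  EOF
  kubectl logs entrypoint-demo      # prints: hello world
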
• [SLOW TEST:6.369 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":691,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:36.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 15 00:02:36.812: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 15 00:02:38.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097756, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097756, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097756, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725097756, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:02:41.868: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:02:41.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:43.028: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9761" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.135 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":51,"skipped":693,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:43.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:02:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-622" for this suite. • [SLOW TEST:16.273 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":52,"skipped":712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:02:59.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:03:03.590: INFO: Waiting up to 5m0s for pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1" in namespace "pods-8370" to be "Succeeded or Failed" May 15 00:03:03.611: INFO: Pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.025239ms May 15 00:03:05.615: INFO: Pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025584542s May 15 00:03:07.620: INFO: Pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1": Phase="Running", Reason="", readiness=true. Elapsed: 4.030126633s May 15 00:03:09.624: INFO: Pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034249394s STEP: Saw pod success May 15 00:03:09.624: INFO: Pod "client-envvars-65d78774-f29a-4622-8bfa-4180108365c1" satisfied condition "Succeeded or Failed" May 15 00:03:09.627: INFO: Trying to get logs from node latest-worker pod client-envvars-65d78774-f29a-4622-8bfa-4180108365c1 container env3cont: STEP: delete the pod May 15 00:03:09.664: INFO: Waiting for pod client-envvars-65d78774-f29a-4622-8bfa-4180108365c1 to disappear May 15 00:03:09.698: INFO: Pod client-envvars-65d78774-f29a-4622-8bfa-4180108365c1 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:03:09.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8370" for this suite. 
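
The env-vars assertion that follows relies on the kubelet injecting Docker-links-style variables for every service that already exists when a pod starts (ordering matters: the service must predate the pod). Sketch with placeholder names:

  # Sketch only: a pod started after the service exists sees
  # BACKEND_SERVICE_HOST, BACKEND_SERVICE_PORT, BACKEND_PORT_8080_TCP, ...
  kubectl create deployment backend --image=nginx
  kubectl expose deployment backend --name=backend --port=8080 --target-port=80
  kubectl run envprobe --image=busybox --restart=Never -- sh -c 'env | grep ^BACKEND_'
  kubectl logs envprobe
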
• [SLOW TEST:10.250 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:03:09.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1608.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1608.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1608.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1608.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 00:03:15.908: INFO: DNS probes using dns-1608/dns-test-7d0c27dc-4201-4554-a4ba-bac97f772985 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:03:16.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1608" for this suite. 
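The getent/dig probes above resolve the pod's service-scoped A record (dns-querier-2.dns-test-service-2.dns-1608.svc.cluster.local). That record exists because the spec creates a headless service and gives the querier pod matching hostname and subdomain fields. A rough sketch under those assumptions; the selector labels, image, and sleep command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless service (ClusterIP: None) so per-pod A records are published.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // illustrative
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	// Hostname+Subdomain give the pod the FQDN
	// dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local,
	// which is exactly what the wheezy/jessie probes in the log look up.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-test-pod",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
			}},
		},
	}
	fmt.Println(svc.Name, pod.Spec.Hostname)
}
```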
• [SLOW TEST:7.201 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":54,"skipped":768,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:03:16.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-4z44 STEP: Creating a pod to test atomic-volume-subpath May 15 00:03:17.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4z44" in namespace "subpath-4209" to be "Succeeded or Failed" May 15 00:03:17.171: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Pending", Reason="", readiness=false. Elapsed: 9.243931ms May 15 00:03:19.474: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312356158s May 15 00:03:21.483: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321473471s May 15 00:03:23.487: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 6.324979407s May 15 00:03:25.491: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 8.329696899s May 15 00:03:27.494: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 10.332697212s May 15 00:03:29.499: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 12.337114431s May 15 00:03:31.503: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 14.341286295s May 15 00:03:33.507: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 16.344898468s May 15 00:03:35.511: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 18.349295103s May 15 00:03:37.515: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 20.353207207s May 15 00:03:39.520: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 22.357800901s May 15 00:03:41.602: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Running", Reason="", readiness=true. Elapsed: 24.440665391s May 15 00:03:43.606: INFO: Pod "pod-subpath-test-configmap-4z44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.444397904s STEP: Saw pod success May 15 00:03:43.606: INFO: Pod "pod-subpath-test-configmap-4z44" satisfied condition "Succeeded or Failed" May 15 00:03:43.609: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-4z44 container test-container-subpath-configmap-4z44: STEP: delete the pod May 15 00:03:43.669: INFO: Waiting for pod pod-subpath-test-configmap-4z44 to disappear May 15 00:03:43.672: INFO: Pod pod-subpath-test-configmap-4z44 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4z44 May 15 00:03:43.672: INFO: Deleting pod "pod-subpath-test-configmap-4z44" in namespace "subpath-4209" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:03:43.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4209" for this suite. • [SLOW TEST:26.776 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":55,"skipped":779,"failed":0} S ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:03:43.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 15 00:03:43.759: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 15 00:03:43.799: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 15 00:03:43.799: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 15 00:03:43.812: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 15 00:03:43.812: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 15 00:03:43.866: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 15 00:03:43.866: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 15 00:03:51.198: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:03:51.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-7137" for this suite. 
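The request/limit values verified above are the LimitRange defaults being copied onto a pod that omits them. A hand-written sketch of a LimitRange carrying exactly the defaults seen in this log (DefaultRequest cpu=100m, memory=200Mi, ephemeral-storage=200Gi; Default cpu=500m, memory=500Mi, ephemeral-storage=500Gi). The Min/Max bounds the spec also sets, which cause the "Failing to create a Pod with less than min / more than max resources" steps, are not shown in this excerpt, so they are omitted here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limits"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied as spec.containers[].resources.requests when omitted.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"), // 209715200 bytes
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"), // 214748364800 bytes
				},
				// Applied as spec.containers[].resources.limits when omitted.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	fmt.Println(lr.Name)
}
```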
• [SLOW TEST:7.551 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":56,"skipped":780,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:03:51.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 15 00:04:01.881: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:01.943: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:03.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:03.948: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:05.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:05.948: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:07.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:07.947: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:09.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:09.948: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:11.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:11.947: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:13.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:13.947: INFO: Pod pod-with-poststart-http-hook still exists May 15 00:04:15.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 00:04:15.948: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:04:15.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1631" for this suite. 
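A sketch of the shape of the pod under test: a container with a PostStart HTTP hook pointed at the handler pod created in BeforeEach ("create the container to handle the HTTPGet hook request"). The image, target IP, port, and /echo path are illustrative assumptions; note that corev1.Handler is the type name in the v1.18-era API this run uses (later releases renamed it corev1.LifecycleHandler):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet issues this GET right after the container
					// starts; the "check poststart hook" step then verifies
					// the handler pod received it.
					PostStart: &corev1.Handler{ // v1.18-era type name
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",     // illustrative path
							Host: "10.0.0.10",               // illustrative: handler pod IP
							Port: intstr.FromInt(8080),      // illustrative port
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```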
• [SLOW TEST:24.722 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":781,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:04:15.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 00:04:16.032: INFO: Waiting up to 5m0s for pod "pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0" in namespace "emptydir-4774" to be "Succeeded or Failed" May 15 00:04:16.038: INFO: Pod "pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.682615ms May 15 00:04:18.043: INFO: Pod "pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010183836s May 15 00:04:20.047: INFO: Pod "pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015142253s STEP: Saw pod success May 15 00:04:20.048: INFO: Pod "pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0" satisfied condition "Succeeded or Failed" May 15 00:04:20.051: INFO: Trying to get logs from node latest-worker pod pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0 container test-container: STEP: delete the pod May 15 00:04:20.087: INFO: Waiting for pod pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0 to disappear May 15 00:04:20.092: INFO: Pod pod-1ededad9-3cdc-45bb-ba19-3cd2a81897c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:04:20.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4774" for this suite. 
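What "(root,0777,tmpfs)" means concretely: an emptyDir volume backed by medium Memory (tmpfs), written as root, with 0777 permissions asserted on the mount. A rough busybox-based sketch; the real spec uses the agnhost mounttest image, so the image and command here are illustrative stand-ins for the same check:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs, not node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the fs type and the mode bits of the mount point.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```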
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:04:20.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:04:24.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8492" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:04:24.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-ebafb666-1ec5-4411-a0ef-7b250a860834 in namespace container-probe-569 May 15 00:04:28.394: INFO: Started pod test-webserver-ebafb666-1ec5-4411-a0ef-7b250a860834 in namespace container-probe-569 STEP: checking the pod's current state and verifying that restartCount is present May 15 00:04:28.396: INFO: Initial restart count of pod test-webserver-ebafb666-1ec5-4411-a0ef-7b250a860834 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:08:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-569" for this suite. 
• [SLOW TEST:244.676 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:08:28.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:08:40.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-711" for this suite. • [SLOW TEST:11.662 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":61,"skipped":918,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:08:40.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-b68b7ad2-cd19-48c9-b4d0-94647678a26b STEP: Creating a pod to test consume configMaps May 15 00:08:40.734: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b" in namespace "projected-1560" to be "Succeeded or Failed" May 15 00:08:40.755: INFO: Pod "pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.210957ms May 15 00:08:42.759: INFO: Pod "pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024636539s May 15 00:08:44.791: INFO: Pod "pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056095395s STEP: Saw pod success May 15 00:08:44.791: INFO: Pod "pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b" satisfied condition "Succeeded or Failed" May 15 00:08:44.794: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b container projected-configmap-volume-test: STEP: delete the pod May 15 00:08:44.856: INFO: Waiting for pod pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b to disappear May 15 00:08:44.870: INFO: Pod pod-projected-configmaps-f5aa07a9-2acb-4fe6-926a-d222ed8d4e9b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:08:44.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1560" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":62,"skipped":931,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:08:44.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:08:45.727: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:08:47.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098125, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098125, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098125, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098125, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:08:50.773: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:03.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4255" for this 
suite. STEP: Destroying namespace "webhook-4255-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.256 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":63,"skipped":935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:03.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:09:04.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:09:06.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098144, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098144, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098144, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098144, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:09:09.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:09.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1750" for this suite. STEP: Destroying namespace "webhook-1750-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.484 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":64,"skipped":962,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:09.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-46ff88d3-2d57-4fe4-8d43-3d81e0470cf0 STEP: Creating a pod to test consume secrets May 15 00:09:09.766: INFO: Waiting up to 5m0s for pod "pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34" in namespace "secrets-2060" to be "Succeeded or Failed" May 15 00:09:09.771: INFO: Pod "pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176565ms May 15 00:09:11.775: INFO: Pod "pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009131277s May 15 00:09:13.780: INFO: Pod "pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013330041s STEP: Saw pod success May 15 00:09:13.780: INFO: Pod "pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34" satisfied condition "Succeeded or Failed" May 15 00:09:13.783: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34 container secret-volume-test: STEP: delete the pod May 15 00:09:13.895: INFO: Waiting for pod pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34 to disappear May 15 00:09:13.908: INFO: Pod pod-secrets-95879816-0a0f-42d5-bf0d-d7de84cc9e34 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2060" for this suite. 
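The secrets "mappings" variant below works the same way as the projected configMap one: instead of one file per key at the volume root, each listed key is projected to an explicit relative path. A sketch of such a pod; the key and remapped path are illustrative, while the secret and container names echo the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// The "mapping": the listed key lands at the given
						// relative path inside the mounted volume.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```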
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:13.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-5676 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5676 to expose endpoints map[] May 15 00:09:14.074: INFO: Get endpoints failed (39.883911ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 15 00:09:15.078: INFO: successfully validated that service multi-endpoint-test in namespace services-5676 exposes endpoints map[] (1.044254771s elapsed) STEP: Creating pod pod1 in namespace services-5676 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5676 to expose endpoints map[pod1:[100]] May 15 00:09:19.122: INFO: successfully validated that service multi-endpoint-test in namespace services-5676 exposes endpoints map[pod1:[100]] (4.03545025s elapsed) STEP: Creating pod pod2 in namespace services-5676 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5676 to expose endpoints map[pod1:[100] pod2:[101]] May 15 00:09:23.413: INFO: successfully validated that service multi-endpoint-test in namespace services-5676 exposes endpoints map[pod1:[100] pod2:[101]] (4.257196122s elapsed) STEP: Deleting pod pod1 in namespace services-5676 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5676 to expose endpoints map[pod2:[101]] May 15 00:09:24.469: INFO: successfully validated that service multi-endpoint-test in namespace services-5676 exposes endpoints map[pod2:[101]] (1.050422629s elapsed) STEP: Deleting pod pod2 in namespace services-5676 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5676 to expose endpoints map[] May 15 00:09:25.487: INFO: successfully validated that service multi-endpoint-test in namespace services-5676 exposes endpoints map[] (1.012952888s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:25.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5676" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.917 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":66,"skipped":1008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:25.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 00:09:30.064: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:30.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5498" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":1070,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:30.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:46.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3778" for this suite. • [SLOW TEST:16.783 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":68,"skipped":1084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:46.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-375563b0-8442-4485-b54d-19438c616ad5 STEP: Creating a pod to test consume configMaps May 15 00:09:47.122: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57" in namespace "projected-3625" to be "Succeeded or Failed" May 15 00:09:47.144: INFO: Pod "pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57": Phase="Pending", Reason="", readiness=false. Elapsed: 22.187376ms May 15 00:09:49.168: INFO: Pod "pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046784717s May 15 00:09:51.172: INFO: Pod "pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050621066s STEP: Saw pod success May 15 00:09:51.172: INFO: Pod "pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57" satisfied condition "Succeeded or Failed" May 15 00:09:51.175: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57 container projected-configmap-volume-test: STEP: delete the pod May 15 00:09:51.248: INFO: Waiting for pod pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57 to disappear May 15 00:09:51.252: INFO: Pod pod-projected-configmaps-2fade3c8-224e-4db5-9380-b9fbd4848d57 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:51.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3625" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1112,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:51.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-b037e2ac-dcea-4799-947e-76200ef15a84 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b037e2ac-dcea-4799-947e-76200ef15a84 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:09:59.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8015" for this suite. 
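The "waiting to observe update in volume" step in the ConfigMap spec above works because configMap volumes are projected by the kubelet and re-synced in place when the API object changes, with no pod restart. A sketch of a pod that would observe that; the image, key name, and polling command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-upd"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-upd"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox",
				// Poll the mounted file: after the ConfigMap object is updated
				// (the "Updating configmap ..." step), the kubelet rewrites the
				// projected file in place, which is the change the spec waits on.
				Command:      []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```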
• [SLOW TEST:8.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":1113,"failed":0} S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:09:59.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 15 00:10:11.797: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:11.797: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:11.828338 7 log.go:172] (0xc001b20840) (0xc001f1a640) Create stream I0515 00:10:11.828376 7 log.go:172] (0xc001b20840) (0xc001f1a640) Stream added, broadcasting: 1 I0515 00:10:11.830105 7 log.go:172] (0xc001b20840) Reply frame received for 1 I0515 00:10:11.830162 7 log.go:172] (0xc001b20840) (0xc001f1a6e0) Create stream I0515 00:10:11.830171 7 log.go:172] (0xc001b20840) (0xc001f1a6e0) Stream added, broadcasting: 3 I0515 00:10:11.831190 7 log.go:172] (0xc001b20840) Reply frame received for 3 I0515 00:10:11.831234 7 log.go:172] (0xc001b20840) (0xc0020cbd60) Create stream I0515 00:10:11.831249 7 log.go:172] (0xc001b20840) (0xc0020cbd60) Stream added, broadcasting: 5 I0515 00:10:11.832385 7 log.go:172] (0xc001b20840) Reply frame received for 5 I0515 00:10:11.906954 7 log.go:172] (0xc001b20840) Data frame received for 3 I0515 00:10:11.906989 7 log.go:172] (0xc001f1a6e0) (3) Data frame handling I0515 00:10:11.906996 7 log.go:172] (0xc001f1a6e0) (3) Data frame sent I0515 00:10:11.907004 7 log.go:172] (0xc001b20840) Data frame received for 3 I0515 00:10:11.907013 7 log.go:172] (0xc001f1a6e0) (3) Data frame handling I0515 00:10:11.907032 7 log.go:172] (0xc001b20840) Data frame received for 5 I0515 00:10:11.907039 7 log.go:172] (0xc0020cbd60) (5) Data frame handling I0515 00:10:11.908606 7 log.go:172] (0xc001b20840) Data frame received for 1 I0515 00:10:11.908641 7 log.go:172] (0xc001f1a640) (1) Data frame handling I0515 00:10:11.908665 7 log.go:172] (0xc001f1a640) (1) Data frame sent I0515 00:10:11.908686 7 log.go:172] (0xc001b20840) (0xc001f1a640) Stream removed, broadcasting: 1 I0515 00:10:11.908710 7 log.go:172] (0xc001b20840) Go away received I0515 00:10:11.908820 7 log.go:172] 
(0xc001b20840) (0xc001f1a640) Stream removed, broadcasting: 1 I0515 00:10:11.908845 7 log.go:172] (0xc001b20840) (0xc001f1a6e0) Stream removed, broadcasting: 3 I0515 00:10:11.908860 7 log.go:172] (0xc001b20840) (0xc0020cbd60) Stream removed, broadcasting: 5 May 15 00:10:11.908: INFO: Exec stderr: "" May 15 00:10:11.908: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:11.908: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:11.944694 7 log.go:172] (0xc001b20f20) (0xc001f1a960) Create stream I0515 00:10:11.944726 7 log.go:172] (0xc001b20f20) (0xc001f1a960) Stream added, broadcasting: 1 I0515 00:10:11.946522 7 log.go:172] (0xc001b20f20) Reply frame received for 1 I0515 00:10:11.946571 7 log.go:172] (0xc001b20f20) (0xc0022088c0) Create stream I0515 00:10:11.946584 7 log.go:172] (0xc001b20f20) (0xc0022088c0) Stream added, broadcasting: 3 I0515 00:10:11.947403 7 log.go:172] (0xc001b20f20) Reply frame received for 3 I0515 00:10:11.947432 7 log.go:172] (0xc001b20f20) (0xc002208a00) Create stream I0515 00:10:11.947442 7 log.go:172] (0xc001b20f20) (0xc002208a00) Stream added, broadcasting: 5 I0515 00:10:11.948135 7 log.go:172] (0xc001b20f20) Reply frame received for 5 I0515 00:10:12.015557 7 log.go:172] (0xc001b20f20) Data frame received for 5 I0515 00:10:12.015599 7 log.go:172] (0xc002208a00) (5) Data frame handling I0515 00:10:12.015635 7 log.go:172] (0xc001b20f20) Data frame received for 3 I0515 00:10:12.015648 7 log.go:172] (0xc0022088c0) (3) Data frame handling I0515 00:10:12.015658 7 log.go:172] (0xc0022088c0) (3) Data frame sent I0515 00:10:12.015669 7 log.go:172] (0xc001b20f20) Data frame received for 3 I0515 00:10:12.015687 7 log.go:172] (0xc0022088c0) (3) Data frame handling I0515 00:10:12.017030 7 log.go:172] (0xc001b20f20) Data frame received for 1 I0515 00:10:12.017046 7 log.go:172] (0xc001f1a960) (1) Data frame handling I0515 00:10:12.017056 7 log.go:172] (0xc001f1a960) (1) Data frame sent I0515 00:10:12.017071 7 log.go:172] (0xc001b20f20) (0xc001f1a960) Stream removed, broadcasting: 1 I0515 00:10:12.017093 7 log.go:172] (0xc001b20f20) Go away received I0515 00:10:12.017283 7 log.go:172] (0xc001b20f20) (0xc001f1a960) Stream removed, broadcasting: 1 I0515 00:10:12.017302 7 log.go:172] (0xc001b20f20) (0xc0022088c0) Stream removed, broadcasting: 3 I0515 00:10:12.017311 7 log.go:172] (0xc001b20f20) (0xc002208a00) Stream removed, broadcasting: 5 May 15 00:10:12.017: INFO: Exec stderr: "" May 15 00:10:12.017: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.017: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.048998 7 log.go:172] (0xc001b21550) (0xc001f1abe0) Create stream I0515 00:10:12.049039 7 log.go:172] (0xc001b21550) (0xc001f1abe0) Stream added, broadcasting: 1 I0515 00:10:12.051501 7 log.go:172] (0xc001b21550) Reply frame received for 1 I0515 00:10:12.051560 7 log.go:172] (0xc001b21550) (0xc001ace000) Create stream I0515 00:10:12.051589 7 log.go:172] (0xc001b21550) (0xc001ace000) Stream added, broadcasting: 3 I0515 00:10:12.052665 7 log.go:172] (0xc001b21550) Reply frame received for 3 I0515 00:10:12.052717 7 log.go:172] (0xc001b21550) (0xc001f1ac80) Create stream I0515 00:10:12.052735 7 log.go:172] (0xc001b21550) (0xc001f1ac80) Stream added, 
broadcasting: 5 I0515 00:10:12.054082 7 log.go:172] (0xc001b21550) Reply frame received for 5 I0515 00:10:12.108660 7 log.go:172] (0xc001b21550) Data frame received for 5 I0515 00:10:12.108702 7 log.go:172] (0xc001f1ac80) (5) Data frame handling I0515 00:10:12.108727 7 log.go:172] (0xc001b21550) Data frame received for 3 I0515 00:10:12.108742 7 log.go:172] (0xc001ace000) (3) Data frame handling I0515 00:10:12.108761 7 log.go:172] (0xc001ace000) (3) Data frame sent I0515 00:10:12.108781 7 log.go:172] (0xc001b21550) Data frame received for 3 I0515 00:10:12.108791 7 log.go:172] (0xc001ace000) (3) Data frame handling I0515 00:10:12.110731 7 log.go:172] (0xc001b21550) Data frame received for 1 I0515 00:10:12.110751 7 log.go:172] (0xc001f1abe0) (1) Data frame handling I0515 00:10:12.110767 7 log.go:172] (0xc001f1abe0) (1) Data frame sent I0515 00:10:12.110785 7 log.go:172] (0xc001b21550) (0xc001f1abe0) Stream removed, broadcasting: 1 I0515 00:10:12.110813 7 log.go:172] (0xc001b21550) Go away received I0515 00:10:12.111002 7 log.go:172] (0xc001b21550) (0xc001f1abe0) Stream removed, broadcasting: 1 I0515 00:10:12.111029 7 log.go:172] (0xc001b21550) (0xc001ace000) Stream removed, broadcasting: 3 I0515 00:10:12.111044 7 log.go:172] (0xc001b21550) (0xc001f1ac80) Stream removed, broadcasting: 5 May 15 00:10:12.111: INFO: Exec stderr: "" May 15 00:10:12.111: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.111: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.137081 7 log.go:172] (0xc0032ec370) (0xc001ace460) Create stream I0515 00:10:12.137286 7 log.go:172] (0xc0032ec370) (0xc001ace460) Stream added, broadcasting: 1 I0515 00:10:12.139183 7 log.go:172] (0xc0032ec370) Reply frame received for 1 I0515 00:10:12.139210 7 log.go:172] (0xc0032ec370) (0xc001f1ae60) Create stream I0515 00:10:12.139220 7 log.go:172] (0xc0032ec370) (0xc001f1ae60) Stream added, broadcasting: 3 I0515 00:10:12.140024 7 log.go:172] (0xc0032ec370) Reply frame received for 3 I0515 00:10:12.140068 7 log.go:172] (0xc0032ec370) (0xc0020cbe00) Create stream I0515 00:10:12.140087 7 log.go:172] (0xc0032ec370) (0xc0020cbe00) Stream added, broadcasting: 5 I0515 00:10:12.141262 7 log.go:172] (0xc0032ec370) Reply frame received for 5 I0515 00:10:12.217658 7 log.go:172] (0xc0032ec370) Data frame received for 5 I0515 00:10:12.217690 7 log.go:172] (0xc0020cbe00) (5) Data frame handling I0515 00:10:12.217720 7 log.go:172] (0xc0032ec370) Data frame received for 3 I0515 00:10:12.217752 7 log.go:172] (0xc001f1ae60) (3) Data frame handling I0515 00:10:12.217773 7 log.go:172] (0xc001f1ae60) (3) Data frame sent I0515 00:10:12.217787 7 log.go:172] (0xc0032ec370) Data frame received for 3 I0515 00:10:12.217796 7 log.go:172] (0xc001f1ae60) (3) Data frame handling I0515 00:10:12.218966 7 log.go:172] (0xc0032ec370) Data frame received for 1 I0515 00:10:12.218989 7 log.go:172] (0xc001ace460) (1) Data frame handling I0515 00:10:12.219018 7 log.go:172] (0xc001ace460) (1) Data frame sent I0515 00:10:12.219037 7 log.go:172] (0xc0032ec370) (0xc001ace460) Stream removed, broadcasting: 1 I0515 00:10:12.219049 7 log.go:172] (0xc0032ec370) Go away received I0515 00:10:12.219194 7 log.go:172] (0xc0032ec370) (0xc001ace460) Stream removed, broadcasting: 1 I0515 00:10:12.219219 7 log.go:172] (0xc0032ec370) (0xc001f1ae60) Stream removed, broadcasting: 3 I0515 00:10:12.219229 7 log.go:172] 
(0xc0032ec370) (0xc0020cbe00) Stream removed, broadcasting: 5 May 15 00:10:12.219: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 15 00:10:12.219: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.219: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.247835 7 log.go:172] (0xc0032ec9a0) (0xc001ace640) Create stream I0515 00:10:12.247871 7 log.go:172] (0xc0032ec9a0) (0xc001ace640) Stream added, broadcasting: 1 I0515 00:10:12.250145 7 log.go:172] (0xc0032ec9a0) Reply frame received for 1 I0515 00:10:12.250186 7 log.go:172] (0xc0032ec9a0) (0xc001ace8c0) Create stream I0515 00:10:12.250205 7 log.go:172] (0xc0032ec9a0) (0xc001ace8c0) Stream added, broadcasting: 3 I0515 00:10:12.251370 7 log.go:172] (0xc0032ec9a0) Reply frame received for 3 I0515 00:10:12.251398 7 log.go:172] (0xc0032ec9a0) (0xc001acea00) Create stream I0515 00:10:12.251413 7 log.go:172] (0xc0032ec9a0) (0xc001acea00) Stream added, broadcasting: 5 I0515 00:10:12.252327 7 log.go:172] (0xc0032ec9a0) Reply frame received for 5 I0515 00:10:12.322334 7 log.go:172] (0xc0032ec9a0) Data frame received for 5 I0515 00:10:12.322411 7 log.go:172] (0xc001acea00) (5) Data frame handling I0515 00:10:12.322466 7 log.go:172] (0xc0032ec9a0) Data frame received for 3 I0515 00:10:12.322526 7 log.go:172] (0xc001ace8c0) (3) Data frame handling I0515 00:10:12.322598 7 log.go:172] (0xc001ace8c0) (3) Data frame sent I0515 00:10:12.322619 7 log.go:172] (0xc0032ec9a0) Data frame received for 3 I0515 00:10:12.322628 7 log.go:172] (0xc001ace8c0) (3) Data frame handling I0515 00:10:12.323849 7 log.go:172] (0xc0032ec9a0) Data frame received for 1 I0515 00:10:12.323894 7 log.go:172] (0xc001ace640) (1) Data frame handling I0515 00:10:12.323934 7 log.go:172] (0xc001ace640) (1) Data frame sent I0515 00:10:12.323949 7 log.go:172] (0xc0032ec9a0) (0xc001ace640) Stream removed, broadcasting: 1 I0515 00:10:12.323960 7 log.go:172] (0xc0032ec9a0) Go away received I0515 00:10:12.324123 7 log.go:172] (0xc0032ec9a0) (0xc001ace640) Stream removed, broadcasting: 1 I0515 00:10:12.324145 7 log.go:172] (0xc0032ec9a0) (0xc001ace8c0) Stream removed, broadcasting: 3 I0515 00:10:12.324158 7 log.go:172] (0xc0032ec9a0) (0xc001acea00) Stream removed, broadcasting: 5 May 15 00:10:12.324: INFO: Exec stderr: "" May 15 00:10:12.324: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.324: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.350741 7 log.go:172] (0xc00343a0b0) (0xc001a5a1e0) Create stream I0515 00:10:12.350770 7 log.go:172] (0xc00343a0b0) (0xc001a5a1e0) Stream added, broadcasting: 1 I0515 00:10:12.352732 7 log.go:172] (0xc00343a0b0) Reply frame received for 1 I0515 00:10:12.352770 7 log.go:172] (0xc00343a0b0) (0xc002208be0) Create stream I0515 00:10:12.352792 7 log.go:172] (0xc00343a0b0) (0xc002208be0) Stream added, broadcasting: 3 I0515 00:10:12.353897 7 log.go:172] (0xc00343a0b0) Reply frame received for 3 I0515 00:10:12.353927 7 log.go:172] (0xc00343a0b0) (0xc002308aa0) Create stream I0515 00:10:12.353942 7 log.go:172] (0xc00343a0b0) (0xc002308aa0) Stream added, broadcasting: 5 I0515 00:10:12.354794 7 log.go:172] (0xc00343a0b0) Reply frame received for 
5 I0515 00:10:12.415661 7 log.go:172] (0xc00343a0b0) Data frame received for 3 I0515 00:10:12.415694 7 log.go:172] (0xc002208be0) (3) Data frame handling I0515 00:10:12.415704 7 log.go:172] (0xc002208be0) (3) Data frame sent I0515 00:10:12.415711 7 log.go:172] (0xc00343a0b0) Data frame received for 3 I0515 00:10:12.415717 7 log.go:172] (0xc002208be0) (3) Data frame handling I0515 00:10:12.416060 7 log.go:172] (0xc00343a0b0) Data frame received for 5 I0515 00:10:12.416080 7 log.go:172] (0xc002308aa0) (5) Data frame handling I0515 00:10:12.416720 7 log.go:172] (0xc00343a0b0) Data frame received for 1 I0515 00:10:12.416735 7 log.go:172] (0xc001a5a1e0) (1) Data frame handling I0515 00:10:12.416752 7 log.go:172] (0xc001a5a1e0) (1) Data frame sent I0515 00:10:12.416781 7 log.go:172] (0xc00343a0b0) (0xc001a5a1e0) Stream removed, broadcasting: 1 I0515 00:10:12.416836 7 log.go:172] (0xc00343a0b0) (0xc001a5a1e0) Stream removed, broadcasting: 1 I0515 00:10:12.416849 7 log.go:172] (0xc00343a0b0) (0xc002208be0) Stream removed, broadcasting: 3 I0515 00:10:12.416961 7 log.go:172] (0xc00343a0b0) (0xc002308aa0) Stream removed, broadcasting: 5 May 15 00:10:12.417: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 15 00:10:12.417: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.417: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.417477 7 log.go:172] (0xc00343a0b0) Go away received I0515 00:10:12.468493 7 log.go:172] (0xc00343a6e0) (0xc001a5a780) Create stream I0515 00:10:12.468535 7 log.go:172] (0xc00343a6e0) (0xc001a5a780) Stream added, broadcasting: 1 I0515 00:10:12.485056 7 log.go:172] (0xc00343a6e0) Reply frame received for 1 I0515 00:10:12.485095 7 log.go:172] (0xc00343a6e0) (0xc001a5a820) Create stream I0515 00:10:12.485104 7 log.go:172] (0xc00343a6e0) (0xc001a5a820) Stream added, broadcasting: 3 I0515 00:10:12.486150 7 log.go:172] (0xc00343a6e0) Reply frame received for 3 I0515 00:10:12.486172 7 log.go:172] (0xc00343a6e0) (0xc002208c80) Create stream I0515 00:10:12.486180 7 log.go:172] (0xc00343a6e0) (0xc002208c80) Stream added, broadcasting: 5 I0515 00:10:12.486801 7 log.go:172] (0xc00343a6e0) Reply frame received for 5 I0515 00:10:12.551798 7 log.go:172] (0xc00343a6e0) Data frame received for 5 I0515 00:10:12.551827 7 log.go:172] (0xc002208c80) (5) Data frame handling I0515 00:10:12.551870 7 log.go:172] (0xc00343a6e0) Data frame received for 3 I0515 00:10:12.551912 7 log.go:172] (0xc001a5a820) (3) Data frame handling I0515 00:10:12.551937 7 log.go:172] (0xc001a5a820) (3) Data frame sent I0515 00:10:12.551990 7 log.go:172] (0xc00343a6e0) Data frame received for 3 I0515 00:10:12.552009 7 log.go:172] (0xc001a5a820) (3) Data frame handling I0515 00:10:12.553928 7 log.go:172] (0xc00343a6e0) Data frame received for 1 I0515 00:10:12.553955 7 log.go:172] (0xc001a5a780) (1) Data frame handling I0515 00:10:12.553985 7 log.go:172] (0xc001a5a780) (1) Data frame sent I0515 00:10:12.554002 7 log.go:172] (0xc00343a6e0) (0xc001a5a780) Stream removed, broadcasting: 1 I0515 00:10:12.554018 7 log.go:172] (0xc00343a6e0) Go away received I0515 00:10:12.554134 7 log.go:172] (0xc00343a6e0) (0xc001a5a780) Stream removed, broadcasting: 1 I0515 00:10:12.554169 7 log.go:172] (0xc00343a6e0) (0xc001a5a820) Stream removed, broadcasting: 3 I0515 00:10:12.554183 7 
log.go:172] (0xc00343a6e0) (0xc002208c80) Stream removed, broadcasting: 5 May 15 00:10:12.554: INFO: Exec stderr: "" May 15 00:10:12.554: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.554: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.603637 7 log.go:172] (0xc001b21b80) (0xc001f1b0e0) Create stream I0515 00:10:12.603679 7 log.go:172] (0xc001b21b80) (0xc001f1b0e0) Stream added, broadcasting: 1 I0515 00:10:12.606313 7 log.go:172] (0xc001b21b80) Reply frame received for 1 I0515 00:10:12.606342 7 log.go:172] (0xc001b21b80) (0xc001a5a8c0) Create stream I0515 00:10:12.606357 7 log.go:172] (0xc001b21b80) (0xc001a5a8c0) Stream added, broadcasting: 3 I0515 00:10:12.607156 7 log.go:172] (0xc001b21b80) Reply frame received for 3 I0515 00:10:12.607189 7 log.go:172] (0xc001b21b80) (0xc001a5a960) Create stream I0515 00:10:12.607213 7 log.go:172] (0xc001b21b80) (0xc001a5a960) Stream added, broadcasting: 5 I0515 00:10:12.607975 7 log.go:172] (0xc001b21b80) Reply frame received for 5 I0515 00:10:12.669039 7 log.go:172] (0xc001b21b80) Data frame received for 5 I0515 00:10:12.669072 7 log.go:172] (0xc001a5a960) (5) Data frame handling I0515 00:10:12.669100 7 log.go:172] (0xc001b21b80) Data frame received for 3 I0515 00:10:12.669377 7 log.go:172] (0xc001a5a8c0) (3) Data frame handling I0515 00:10:12.669401 7 log.go:172] (0xc001a5a8c0) (3) Data frame sent I0515 00:10:12.669414 7 log.go:172] (0xc001b21b80) Data frame received for 3 I0515 00:10:12.669423 7 log.go:172] (0xc001a5a8c0) (3) Data frame handling I0515 00:10:12.670450 7 log.go:172] (0xc001b21b80) Data frame received for 1 I0515 00:10:12.670471 7 log.go:172] (0xc001f1b0e0) (1) Data frame handling I0515 00:10:12.670480 7 log.go:172] (0xc001f1b0e0) (1) Data frame sent I0515 00:10:12.670489 7 log.go:172] (0xc001b21b80) (0xc001f1b0e0) Stream removed, broadcasting: 1 I0515 00:10:12.670503 7 log.go:172] (0xc001b21b80) Go away received I0515 00:10:12.670694 7 log.go:172] (0xc001b21b80) (0xc001f1b0e0) Stream removed, broadcasting: 1 I0515 00:10:12.670709 7 log.go:172] (0xc001b21b80) (0xc001a5a8c0) Stream removed, broadcasting: 3 I0515 00:10:12.670725 7 log.go:172] (0xc001b21b80) (0xc001a5a960) Stream removed, broadcasting: 5 May 15 00:10:12.670: INFO: Exec stderr: "" May 15 00:10:12.670: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.670: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.694678 7 log.go:172] (0xc003562370) (0xc002308dc0) Create stream I0515 00:10:12.694713 7 log.go:172] (0xc003562370) (0xc002308dc0) Stream added, broadcasting: 1 I0515 00:10:12.697071 7 log.go:172] (0xc003562370) Reply frame received for 1 I0515 00:10:12.697383 7 log.go:172] (0xc003562370) (0xc001f1b220) Create stream I0515 00:10:12.697407 7 log.go:172] (0xc003562370) (0xc001f1b220) Stream added, broadcasting: 3 I0515 00:10:12.698359 7 log.go:172] (0xc003562370) Reply frame received for 3 I0515 00:10:12.698392 7 log.go:172] (0xc003562370) (0xc002208d20) Create stream I0515 00:10:12.698405 7 log.go:172] (0xc003562370) (0xc002208d20) Stream added, broadcasting: 5 I0515 00:10:12.699146 7 log.go:172] (0xc003562370) Reply frame received for 5 I0515 00:10:12.750065 7 log.go:172] (0xc003562370) Data frame 
received for 5 I0515 00:10:12.750108 7 log.go:172] (0xc002208d20) (5) Data frame handling I0515 00:10:12.750132 7 log.go:172] (0xc003562370) Data frame received for 3 I0515 00:10:12.750160 7 log.go:172] (0xc001f1b220) (3) Data frame handling I0515 00:10:12.750202 7 log.go:172] (0xc001f1b220) (3) Data frame sent I0515 00:10:12.750217 7 log.go:172] (0xc003562370) Data frame received for 3 I0515 00:10:12.750236 7 log.go:172] (0xc001f1b220) (3) Data frame handling I0515 00:10:12.751431 7 log.go:172] (0xc003562370) Data frame received for 1 I0515 00:10:12.751461 7 log.go:172] (0xc002308dc0) (1) Data frame handling I0515 00:10:12.751482 7 log.go:172] (0xc002308dc0) (1) Data frame sent I0515 00:10:12.751500 7 log.go:172] (0xc003562370) (0xc002308dc0) Stream removed, broadcasting: 1 I0515 00:10:12.751520 7 log.go:172] (0xc003562370) Go away received I0515 00:10:12.751664 7 log.go:172] (0xc003562370) (0xc002308dc0) Stream removed, broadcasting: 1 I0515 00:10:12.751696 7 log.go:172] (0xc003562370) (0xc001f1b220) Stream removed, broadcasting: 3 I0515 00:10:12.751731 7 log.go:172] (0xc003562370) (0xc002208d20) Stream removed, broadcasting: 5 May 15 00:10:12.751: INFO: Exec stderr: "" May 15 00:10:12.751: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8705 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:10:12.751: INFO: >>> kubeConfig: /root/.kube/config I0515 00:10:12.783703 7 log.go:172] (0xc002c25130) (0xc002208fa0) Create stream I0515 00:10:12.783740 7 log.go:172] (0xc002c25130) (0xc002208fa0) Stream added, broadcasting: 1 I0515 00:10:12.786871 7 log.go:172] (0xc002c25130) Reply frame received for 1 I0515 00:10:12.786951 7 log.go:172] (0xc002c25130) (0xc002209040) Create stream I0515 00:10:12.787003 7 log.go:172] (0xc002c25130) (0xc002209040) Stream added, broadcasting: 3 I0515 00:10:12.790117 7 log.go:172] (0xc002c25130) Reply frame received for 3 I0515 00:10:12.790173 7 log.go:172] (0xc002c25130) (0xc0019c2000) Create stream I0515 00:10:12.790196 7 log.go:172] (0xc002c25130) (0xc0019c2000) Stream added, broadcasting: 5 I0515 00:10:12.791535 7 log.go:172] (0xc002c25130) Reply frame received for 5 I0515 00:10:12.866733 7 log.go:172] (0xc002c25130) Data frame received for 3 I0515 00:10:12.866761 7 log.go:172] (0xc002209040) (3) Data frame handling I0515 00:10:12.866777 7 log.go:172] (0xc002209040) (3) Data frame sent I0515 00:10:12.866928 7 log.go:172] (0xc002c25130) Data frame received for 3 I0515 00:10:12.866950 7 log.go:172] (0xc002209040) (3) Data frame handling I0515 00:10:12.866982 7 log.go:172] (0xc002c25130) Data frame received for 5 I0515 00:10:12.867014 7 log.go:172] (0xc0019c2000) (5) Data frame handling I0515 00:10:12.867878 7 log.go:172] (0xc002c25130) Data frame received for 1 I0515 00:10:12.867896 7 log.go:172] (0xc002208fa0) (1) Data frame handling I0515 00:10:12.867911 7 log.go:172] (0xc002208fa0) (1) Data frame sent I0515 00:10:12.867930 7 log.go:172] (0xc002c25130) (0xc002208fa0) Stream removed, broadcasting: 1 I0515 00:10:12.867958 7 log.go:172] (0xc002c25130) Go away received I0515 00:10:12.868004 7 log.go:172] (0xc002c25130) (0xc002208fa0) Stream removed, broadcasting: 1 I0515 00:10:12.868020 7 log.go:172] (0xc002c25130) (0xc002209040) Stream removed, broadcasting: 3 I0515 00:10:12.868033 7 log.go:172] (0xc002c25130) (0xc0019c2000) Stream removed, broadcasting: 5 May 15 00:10:12.868: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:12.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8705" for this suite. • [SLOW TEST:13.310 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1114,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:12.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0515 00:10:14.026713 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 00:10:14.026: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:14.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7614" for this suite. 
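Note on the garbage-collector spec above: deleting a Deployment "when not orphaning" means the owned ReplicaSet and Pods are removed by the garbage collector once the owner is gone. A minimal client-go sketch of the same delete call, assuming a cluster reachable via /root/.kube/config; the namespace and Deployment name are illustrative, not from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the GC deletes the dependent ReplicaSet and
	// Pods after the Deployment is gone. DeletePropagationOrphan would
	// instead leave them behind, which is what this spec rules out.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(),
		"example-deployment", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; dependents will be garbage-collected")
}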
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":72,"skipped":1114,"failed":0} ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:14.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 15 00:10:14.262: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3796 /api/v1/namespaces/watch-3796/configmaps/e2e-watch-test-watch-closed e09f4f62-6507-4fc2-abcf-b118b55f951c 4670737 0 2020-05-15 00:10:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:10:14.262: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3796 /api/v1/namespaces/watch-3796/configmaps/e2e-watch-test-watch-closed e09f4f62-6507-4fc2-abcf-b118b55f951c 4670738 0 2020-05-15 00:10:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 15 00:10:14.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3796 /api/v1/namespaces/watch-3796/configmaps/e2e-watch-test-watch-closed e09f4f62-6507-4fc2-abcf-b118b55f951c 4670739 0 2020-05-15 00:10:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:10:14.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3796 /api/v1/namespaces/watch-3796/configmaps/e2e-watch-test-watch-closed e09f4f62-6507-4fc2-abcf-b118b55f951c 4670740 0 2020-05-15 00:10:14 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:14 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:14.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3796" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":73,"skipped":1114,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:14.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 15 00:10:14.440: INFO: Waiting up to 5m0s for pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4" in namespace "emptydir-1032" to be "Succeeded or Failed" May 15 00:10:14.444: INFO: Pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.950312ms May 15 00:10:16.448: INFO: Pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008349643s May 15 00:10:18.451: INFO: Pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011512927s May 15 00:10:20.456: INFO: Pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015638893s STEP: Saw pod success May 15 00:10:20.456: INFO: Pod "pod-197b402b-49a2-400f-b6ed-86b9ed90cee4" satisfied condition "Succeeded or Failed" May 15 00:10:20.459: INFO: Trying to get logs from node latest-worker2 pod pod-197b402b-49a2-400f-b6ed-86b9ed90cee4 container test-container: STEP: delete the pod May 15 00:10:20.506: INFO: Waiting for pod pod-197b402b-49a2-400f-b6ed-86b9ed90cee4 to disappear May 15 00:10:20.523: INFO: Pod pod-197b402b-49a2-400f-b6ed-86b9ed90cee4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:20.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1032" for this suite. 
• [SLOW TEST:6.168 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1114,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:20.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:10:20.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:26.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2997" for this suite. 
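Listing CustomResourceDefinitions, as the spec above does, goes through the apiextensions clientset rather than the core clientset. A minimal sketch assuming the same kubeconfig path; this is an illustration of the API used, not the test's exact code:

package main

import (
	"context"
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CRDs are cluster-scoped, so there is no namespace argument here.
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}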
• [SLOW TEST:6.342 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":75,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:26.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:10:26.945: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 15 00:10:29.008: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:30.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1867" for this suite. 
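The failure condition checked by the ReplicationController spec above is surfaced on the RC's status once the ResourceQuota blocks pod creation, and cleared again after the scale-down. A minimal sketch that reads the condition back; the namespace is illustrative, while the RC name mirrors the spec:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rc, err := cs.CoreV1().ReplicationControllers("default").Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range rc.Status.Conditions {
		if c.Type == "ReplicaFailure" { // condition type set by the RC controller on quota errors
			fmt.Printf("%s=%s reason=%s: %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}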
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":76,"skipped":1138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:30.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-eb3c833f-96f3-4b08-a8e9-d542029e3448 STEP: Creating a pod to test consume secrets May 15 00:10:30.686: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd" in namespace "projected-2904" to be "Succeeded or Failed" May 15 00:10:30.858: INFO: Pod "pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd": Phase="Pending", Reason="", readiness=false. Elapsed: 172.131167ms May 15 00:10:32.862: INFO: Pod "pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176020515s May 15 00:10:34.866: INFO: Pod "pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179479642s STEP: Saw pod success May 15 00:10:34.866: INFO: Pod "pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd" satisfied condition "Succeeded or Failed" May 15 00:10:34.869: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd container projected-secret-volume-test: STEP: delete the pod May 15 00:10:34.902: INFO: Waiting for pod pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd to disappear May 15 00:10:34.913: INFO: Pod pod-projected-secrets-bc5da292-89cd-4ac8-9a8c-e1b1e8c568dd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:34.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2904" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1189,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:34.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:35.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1128" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1190,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:35.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 15 00:10:35.510: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 15 00:10:35.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:42.157: INFO: stderr: "" May 15 00:10:42.157: INFO: stdout: "service/agnhost-slave created\n" May 15 00:10:42.158: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 15 00:10:42.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:42.485: INFO: stderr: "" May 15 00:10:42.485: INFO: stdout: "service/agnhost-master created\n" May 15 00:10:42.486: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 15 00:10:42.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:42.773: INFO: stderr: "" May 15 00:10:42.773: INFO: stdout: "service/frontend created\n" May 15 00:10:42.773: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 15 00:10:42.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:43.032: INFO: stderr: "" May 15 00:10:43.032: INFO: stdout: "deployment.apps/frontend created\n" May 15 00:10:43.032: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 15 00:10:43.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:43.318: INFO: stderr: "" May 15 00:10:43.318: INFO: stdout: "deployment.apps/agnhost-master created\n" May 15 00:10:43.318: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 15 00:10:43.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' May 15 00:10:43.656: INFO: stderr: "" May 15 00:10:43.656: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 15 00:10:43.656: INFO: Waiting for all frontend pods to be Running. May 15 00:10:53.707: INFO: Waiting for frontend to serve content. May 15 00:10:53.714: INFO: Trying to add a new entry to the guestbook. May 15 00:10:53.723: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 15 00:10:53.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:53.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:53.920: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 15 00:10:53.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:54.099: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:54.099: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 15 00:10:54.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:54.268: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:54.268: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 00:10:54.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:54.398: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:54.398: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 00:10:54.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:55.039: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:55.039: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 15 00:10:55.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' May 15 00:10:55.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:10:55.594: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:10:55.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3172" for this suite. 
• [SLOW TEST:20.793 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":79,"skipped":1206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:10:56.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 15 00:10:57.739: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671260 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:10:57.739: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671261 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:10:57.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671265 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:10:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label 
value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 15 00:11:07.902: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671368 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:11:07.902: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671370 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:11:07.902: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9313 /api/v1/namespaces/watch-9313/configmaps/e2e-watch-test-label-changed 0ae6daf2-84b7-4af6-9d2d-ba69e3427384 4671371 0 2020-05-15 00:10:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:11:07.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9313" for this suite. 
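Both watcher specs above hinge on the same two knobs: a LabelSelector that restricts which objects produce events, and a ResourceVersion from which a new watch replays every change that happened while the previous watch was closed. A minimal client-go sketch of that pattern; the namespace is illustrative, while the selector mirrors the spec:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "watch-this-configmap=label-changed-and-restored"
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	var lastRV string
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
		lastRV = cm.ResourceVersion
		w.Stop() // simulate the watch being closed after one notification
		break
	}
	// A new watch started from the last observed resourceVersion replays
	// the ADDED/MODIFIED/DELETED events that occurred while it was closed,
	// instead of dropping them.
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: selector, ResourceVersion: lastRV})
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("replayed:", ev.Type)
	}
}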
• [SLOW TEST:11.695 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":80,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:11:07.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:11:08.027: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 15 00:11:13.030: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 00:11:13.030: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 15 00:11:15.034: INFO: Creating deployment "test-rollover-deployment" May 15 00:11:15.056: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 15 00:11:17.062: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 15 00:11:17.068: INFO: Ensure that both replica sets have 1 created replica May 15 00:11:17.075: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 15 00:11:17.083: INFO: Updating deployment test-rollover-deployment May 15 00:11:17.083: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 15 00:11:19.138: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 15 00:11:19.146: INFO: Make sure deployment "test-rollover-deployment" is complete May 15 00:11:19.151: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:19.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098277, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:21.159: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:21.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098280, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:23.158: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:23.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098280, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:25.160: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:25.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098280, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:27.159: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:27.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098280, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:29.160: INFO: all replica sets need to contain the pod-template-hash label May 15 00:11:29.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098280, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098275, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:11:31.198: INFO: May 15 00:11:31.198: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 15 00:11:31.206: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-178 /apis/apps/v1/namespaces/deployment-178/deployments/test-rollover-deployment 39cbe0a3-707e-4af7-bf60-7f72705ee92e 4671535 2 2020-05-15 00:11:15 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-15 00:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 00:11:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00382e6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 00:11:15 +0000 UTC,LastTransitionTime:2020-05-15 00:11:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-15 00:11:30 +0000 UTC,LastTransitionTime:2020-05-15 00:11:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 00:11:31.210: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-178 /apis/apps/v1/namespaces/deployment-178/replicasets/test-rollover-deployment-7c4fd9c879 25682dcf-e529-45ec-aade-53e28902134e 4671523 2 2020-05-15 00:11:17 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 39cbe0a3-707e-4af7-bf60-7f72705ee92e 0xc00382ed27 0xc00382ed28}] [] [{kube-controller-manager Update apps/v1 2020-05-15 00:11:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39cbe0a3-707e-4af7-bf60-7f72705ee92e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00382edb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 00:11:31.210: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 15 00:11:31.210: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-178 /apis/apps/v1/namespaces/deployment-178/replicasets/test-rollover-controller ec5a8df9-f0f9-4f62-b134-f9a3a65b46aa 4671533 2 2020-05-15 00:11:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 39cbe0a3-707e-4af7-bf60-7f72705ee92e 0xc00382eb17 0xc00382eb18}] [] [{e2e.test Update apps/v1 2020-05-15 00:11:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 00:11:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39cbe0a3-707e-4af7-bf60-7f72705ee92e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00382ebb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 00:11:31.210: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-178 /apis/apps/v1/namespaces/deployment-178/replicasets/test-rollover-deployment-5686c4cfd5 928d28ba-1d62-4da4-8c40-374186714e33 4671464 2 2020-05-15 00:11:15 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 39cbe0a3-707e-4af7-bf60-7f72705ee92e 0xc00382ec27 0xc00382ec28}] [] [{kube-controller-manager Update apps/v1 2020-05-15 00:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39cbe0a3-707e-4af7-bf60-7f72705ee92e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00382ecb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 00:11:31.214: INFO: Pod "test-rollover-deployment-7c4fd9c879-8pqh2" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-8pqh2 test-rollover-deployment-7c4fd9c879- deployment-178 /api/v1/namespaces/deployment-178/pods/test-rollover-deployment-7c4fd9c879-8pqh2 968ccfdf-9901-46e6-a366-ad8793d9ed8f 4671484 0 2020-05-15 00:11:17 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 25682dcf-e529-45ec-aade-53e28902134e 0xc00382f377 0xc00382f378}] [] [{kube-controller-manager Update v1 2020-05-15 00:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25682dcf-e529-45ec-aade-53e28902134e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 00:11:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x5s7r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x5s7r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x5s7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:11:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 00:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:11:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.118,StartTime:2020-05-15 00:11:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 00:11:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://7e29727420d3cb94e5499b1fb3cc311bd86027c496e97efefe94698cf29e9f35,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:11:31.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-178" for this suite. • [SLOW TEST:23.288 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":81,"skipped":1282,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:11:31.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:11:31.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6" in namespace "projected-6343" to be "Succeeded or Failed" May 15 00:11:31.358: INFO: Pod "downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.706088ms May 15 00:11:33.361: INFO: Pod "downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021416799s May 15 00:11:35.366: INFO: Pod "downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025871736s STEP: Saw pod success May 15 00:11:35.366: INFO: Pod "downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6" satisfied condition "Succeeded or Failed" May 15 00:11:35.368: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6 container client-container: STEP: delete the pod May 15 00:11:35.404: INFO: Waiting for pod downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6 to disappear May 15 00:11:35.438: INFO: Pod downwardapi-volume-e85204ba-9387-498b-91d1-d7c662e8ddb6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:11:35.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6343" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:11:35.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 00:11:35.621: INFO: Waiting up to 5m0s for pod "pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1" in namespace "emptydir-9166" to be "Succeeded or Failed" May 15 00:11:35.660: INFO: Pod "pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.006149ms May 15 00:11:37.816: INFO: Pod "pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195586592s May 15 00:11:39.821: INFO: Pod "pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200312175s STEP: Saw pod success May 15 00:11:39.821: INFO: Pod "pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1" satisfied condition "Succeeded or Failed" May 15 00:11:39.824: INFO: Trying to get logs from node latest-worker2 pod pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1 container test-container: STEP: delete the pod May 15 00:11:39.863: INFO: Waiting for pod pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1 to disappear May 15 00:11:39.875: INFO: Pod pod-f9a154d7-44c4-4ce3-b259-b16a5cdecdc1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:11:39.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9166" for this suite. 
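For reference, the emptyDir scenario just exercised reduces to a single-container pod: a tmpfs-backed emptyDir, a non-root security context, and a command that creates a file with 0666 permissions and exits. The sketch below builds such a pod with client-go types; the image, UID, paths and command are illustrative placeholders, not the suite's actual pod (which the framework assembles with its own helpers).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1001) // placeholder non-root UID; the suite picks its own
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium=Memory is what makes this emptyDir a tmpfs mount.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container", // matches the log's "container test-container"
                    Image: "busybox",        // placeholder image
                    Command: []string{"sh", "-c",
                        "touch /mnt/test/file && chmod 0666 /mnt/test/file && stat -c %a /mnt/test/file"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // the manifest a client would POST to the API server
    }

A pod like this reaches phase Succeeded exactly when its command exits 0, which is what the "Succeeded or Failed" poll above is waiting for.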
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1317,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:11:39.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 15 00:11:39.941: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671639 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:11:39.942: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671639 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 15 00:11:49.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671695 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:11:49.950: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671695 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 15 00:11:59.956: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671730 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:11:59.957: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671730 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 15 00:12:09.962: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671769 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:12:09.962: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-a 0af4118a-88e7-422c-899a-63a442317db4 4671769 0 2020-05-15 00:11:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-15 00:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 15 00:12:19.970: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-b 89f182c5-d9eb-422c-b3ae-8ef52089e218 4671806 0 2020-05-15 00:12:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-15 00:12:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:12:19.970: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-b 89f182c5-d9eb-422c-b3ae-8ef52089e218 4671806 0 2020-05-15 00:12:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-15 00:12:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 15 00:12:29.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b 
watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-b 89f182c5-d9eb-422c-b3ae-8ef52089e218 4671841 0 2020-05-15 00:12:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-15 00:12:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 15 00:12:29.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3120 /api/v1/namespaces/watch-3120/configmaps/e2e-watch-test-configmap-b 89f182c5-d9eb-422c-b3ae-8ef52089e218 4671841 0 2020-05-15 00:12:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-15 00:12:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:12:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3120" for this suite. • [SLOW TEST:60.104 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":84,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:12:39.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 15 00:12:46.134: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5101 PodName:pod-sharedvolume-85025c78-b07e-4837-adf9-cd150c8b0cc3 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:12:46.134: INFO: >>> kubeConfig: /root/.kube/config I0515 00:12:46.161279       7 log.go:172] (0xc002700000) (0xc0019c3860) Create stream I0515 00:12:46.161301       7 log.go:172] (0xc002700000) (0xc0019c3860) Stream added, broadcasting: 1 I0515 00:12:46.162801       7 log.go:172] (0xc002700000) Reply frame received for 1 I0515 00:12:46.162833       7 log.go:172] (0xc002700000) (0xc002308140) Create stream I0515 00:12:46.162844       7 log.go:172] (0xc002700000) (0xc002308140) Stream added, broadcasting: 3 I0515 00:12:46.163576       7 log.go:172] (0xc002700000) Reply frame received for 3 I0515
00:12:46.163597 7 log.go:172] (0xc002700000) (0xc0014c7680) Create stream I0515 00:12:46.163606 7 log.go:172] (0xc002700000) (0xc0014c7680) Stream added, broadcasting: 5 I0515 00:12:46.164363 7 log.go:172] (0xc002700000) Reply frame received for 5 I0515 00:12:46.255655 7 log.go:172] (0xc002700000) Data frame received for 3 I0515 00:12:46.255674 7 log.go:172] (0xc002308140) (3) Data frame handling I0515 00:12:46.255684 7 log.go:172] (0xc002308140) (3) Data frame sent I0515 00:12:46.255692 7 log.go:172] (0xc002700000) Data frame received for 3 I0515 00:12:46.255699 7 log.go:172] (0xc002308140) (3) Data frame handling I0515 00:12:46.255735 7 log.go:172] (0xc002700000) Data frame received for 5 I0515 00:12:46.255761 7 log.go:172] (0xc0014c7680) (5) Data frame handling I0515 00:12:46.257365 7 log.go:172] (0xc002700000) Data frame received for 1 I0515 00:12:46.257379 7 log.go:172] (0xc0019c3860) (1) Data frame handling I0515 00:12:46.257390 7 log.go:172] (0xc0019c3860) (1) Data frame sent I0515 00:12:46.257587 7 log.go:172] (0xc002700000) (0xc0019c3860) Stream removed, broadcasting: 1 I0515 00:12:46.257637 7 log.go:172] (0xc002700000) (0xc0019c3860) Stream removed, broadcasting: 1 I0515 00:12:46.257649 7 log.go:172] (0xc002700000) (0xc002308140) Stream removed, broadcasting: 3 I0515 00:12:46.257657 7 log.go:172] (0xc002700000) (0xc0014c7680) Stream removed, broadcasting: 5 May 15 00:12:46.257: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:12:46.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0515 00:12:46.257733 7 log.go:172] (0xc002700000) Go away received STEP: Destroying namespace "emptydir-5101" for this suite. 
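The shared-volume spec above comes down to one pod with two containers mounting the same emptyDir: one side writes /usr/share/volumeshare/shareddata.txt, and the suite then execs a cat in the other (the ExecWithOptions record). The container and mount path names below are taken from the log; which container writes the file is not visible in the log, so the writer role, images and file content are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        shared := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name:         "shared-data",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{
                    {
                        // Assumed writer role: drops the file the other container reads.
                        Name:  "nginx-container",
                        Image: "busybox", // placeholder image
                        Command: []string{"sh", "-c",
                            "echo shared-data > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                        VolumeMounts: []corev1.VolumeMount{shared},
                    },
                    {
                        // The suite execs `cat shareddata.txt` in this container.
                        Name:         "busybox-main-container",
                        Image:        "busybox", // placeholder image
                        Command:      []string{"sleep", "3600"},
                        VolumeMounts: []corev1.VolumeMount{shared},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

Because an emptyDir lives at pod scope rather than container scope, the second container sees the file as soon as the first one writes it, with no coordination beyond the shared mount.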
• [SLOW TEST:6.277 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":85,"skipped":1340,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:12:46.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:12:46.370: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab" in namespace "projected-198" to be "Succeeded or Failed" May 15 00:12:46.373: INFO: Pod "downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.905394ms May 15 00:12:48.378: INFO: Pod "downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00859644s May 15 00:12:50.383: INFO: Pod "downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013012459s STEP: Saw pod success May 15 00:12:50.383: INFO: Pod "downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab" satisfied condition "Succeeded or Failed" May 15 00:12:50.386: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab container client-container: STEP: delete the pod May 15 00:12:50.644: INFO: Waiting for pod downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab to disappear May 15 00:12:50.805: INFO: Pod downwardapi-volume-48528343-f1cf-42b4-b7c7-f078bda19eab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:12:50.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-198" for this suite. 
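The downward-API volume used above can be reproduced with a projected volume whose source is a downwardAPI projection pointing at limits.memory. A hedged sketch follows: the container name client-container matches the log, while the file path, image and the 64Mi limit are placeholder choices rather than the suite's values.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit", // placeholder file name
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // placeholder image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

With the default divisor of 1, the kubelet renders the limit in bytes (67108864 for 64Mi); the suite then inspects the container's output, which is what the "Trying to get logs" step records.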
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:12:50.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 15 00:12:50.953: INFO: >>> kubeConfig: /root/.kube/config May 15 00:12:53.915: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:13:04.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1188" for this suite. • [SLOW TEST:13.742 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":87,"skipped":1381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:13:04.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8969, will wait for the garbage collector to delete the pods May 15 00:13:10.747: INFO: Deleting Job.batch foo took: 5.675926ms May 15 00:13:11.048: INFO: Terminating Job.batch foo pods took: 300.175628ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:13:54.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8969" for this suite. 
• [SLOW TEST:50.391 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":88,"skipped":1430,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:13:54.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:13:55.734: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:13:57.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098435, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098435, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098435, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098435, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:14:00.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:14:00.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6997" for this suite. STEP: Destroying namespace "webhook-6997-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.065 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":89,"skipped":1439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:14:01.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-cb244c75-baf1-4134-8736-e69714d7e7a0 STEP: Creating a pod to test consume configMaps May 15 00:14:01.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2" in namespace "configmap-8220" to be "Succeeded or Failed" May 15 00:14:01.161: INFO: Pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.892775ms May 15 00:14:03.170: INFO: Pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068442792s May 15 00:14:05.315: INFO: Pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212854782s May 15 00:14:07.319: INFO: Pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.217430388s STEP: Saw pod success May 15 00:14:07.319: INFO: Pod "pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2" satisfied condition "Succeeded or Failed" May 15 00:14:07.323: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2 container configmap-volume-test: STEP: delete the pod May 15 00:14:07.368: INFO: Waiting for pod pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2 to disappear May 15 00:14:07.376: INFO: Pod pod-configmaps-8f57bd4a-0f1c-49e9-a388-d4b26959aeb2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:14:07.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8220" for this suite. • [SLOW TEST:6.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1528,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:14:07.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-398a7b23-84b5-4fe1-8727-3568fc870089 STEP: Creating a pod to test consume secrets May 15 00:14:07.540: INFO: Waiting up to 5m0s for pod "pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277" in namespace "secrets-1250" to be "Succeeded or Failed" May 15 00:14:07.578: INFO: Pod "pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277": Phase="Pending", Reason="", readiness=false. Elapsed: 37.622685ms May 15 00:14:09.582: INFO: Pod "pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041583862s May 15 00:14:11.586: INFO: Pod "pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04537601s STEP: Saw pod success May 15 00:14:11.586: INFO: Pod "pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277" satisfied condition "Succeeded or Failed" May 15 00:14:11.588: INFO: Trying to get logs from node latest-worker pod pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277 container secret-volume-test: STEP: delete the pod May 15 00:14:11.807: INFO: Waiting for pod pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277 to disappear May 15 00:14:11.814: INFO: Pod pod-secrets-4c7c8460-47fe-4bb9-85ef-98dc5cc53277 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:14:11.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1250" for this suite. STEP: Destroying namespace "secret-namespace-725" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1532,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:14:11.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-aa1ab886-c886-4922-88bd-60ccc46577ff STEP: Creating a pod to test consume configMaps May 15 00:14:11.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd" in namespace "projected-7936" to be "Succeeded or Failed" May 15 00:14:11.991: INFO: Pod "pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.321718ms May 15 00:14:14.069: INFO: Pod "pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10923368s May 15 00:14:16.099: INFO: Pod "pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.139785467s STEP: Saw pod success May 15 00:14:16.100: INFO: Pod "pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd" satisfied condition "Succeeded or Failed" May 15 00:14:16.104: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd container projected-configmap-volume-test: STEP: delete the pod May 15 00:14:16.146: INFO: Waiting for pod pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd to disappear May 15 00:14:16.179: INFO: Pod pod-projected-configmaps-f00943e1-f7c9-4ccc-8141-b4e59d7654cd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:14:16.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7936" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1541,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:14:16.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:14:33.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9644" for this suite. • [SLOW TEST:17.326 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":93,"skipped":1561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:14:33.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:16:33.596: INFO: Deleting pod "var-expansion-c7552067-23af-4549-94db-bf1b7b512697" in namespace "var-expansion-7620" May 15 00:16:33.602: INFO: Wait up to 5m0s for pod "var-expansion-c7552067-23af-4549-94db-bf1b7b512697" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:16:35.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7620" for this suite. • [SLOW TEST:122.137 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":94,"skipped":1595,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:16:35.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 00:16:35.831: INFO: Waiting up to 5m0s for pod "downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4" in namespace "downward-api-4137" to be "Succeeded or Failed" May 15 00:16:35.909: INFO: Pod "downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4": Phase="Pending", Reason="", readiness=false. Elapsed: 78.181152ms May 15 00:16:38.148: INFO: Pod "downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317096242s May 15 00:16:40.152: INFO: Pod "downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.320398878s STEP: Saw pod success May 15 00:16:40.152: INFO: Pod "downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4" satisfied condition "Succeeded or Failed" May 15 00:16:40.154: INFO: Trying to get logs from node latest-worker pod downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4 container dapi-container: STEP: delete the pod May 15 00:16:40.184: INFO: Waiting for pod downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4 to disappear May 15 00:16:40.188: INFO: Pod downward-api-4c797bf6-e8df-42f2-849a-326a3f8376e4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:16:40.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4137" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1608,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:16:40.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:16:40.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17" in namespace "downward-api-8519" to be "Succeeded or Failed" May 15 00:16:40.303: INFO: Pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17": Phase="Pending", Reason="", readiness=false. Elapsed: 1.926274ms May 15 00:16:42.448: INFO: Pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14693692s May 15 00:16:44.451: INFO: Pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17": Phase="Running", Reason="", readiness=true. Elapsed: 4.150405297s May 15 00:16:46.456: INFO: Pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.155272873s STEP: Saw pod success May 15 00:16:46.456: INFO: Pod "downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17" satisfied condition "Succeeded or Failed" May 15 00:16:46.460: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17 container client-container: STEP: delete the pod May 15 00:16:46.510: INFO: Waiting for pod downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17 to disappear May 15 00:16:46.525: INFO: Pod downwardapi-volume-18da2180-1fb0-4572-ac87-227dbcd72d17 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:16:46.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8519" for this suite. • [SLOW TEST:6.314 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1616,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:16:46.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 00:16:54.685: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 00:16:54.716: INFO: Pod pod-with-prestop-exec-hook still exists May 15 00:16:56.716: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 00:16:56.720: INFO: Pod pod-with-prestop-exec-hook still exists May 15 00:16:58.716: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 00:16:58.721: INFO: Pod pod-with-prestop-exec-hook still exists May 15 00:17:00.716: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 00:17:00.720: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:00.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3873" for this suite. 
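For reference, the lifecycle-hook test above creates a pod whose container declares an exec preStop handler, deletes the pod, and then checks that a separate handler pod observed the hook. A minimal sketch of that kind of pod spec follows; the pod name echoes the log, but the image, the sleep command, and the handler URL are illustrative placeholders, not the exact spec the suite builds:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # the kubelet runs this inside the container before sending SIGTERM;
          # HANDLER_POD_IP is a placeholder for the HTTPGet handler pod the
          # suite creates in its BeforeEach
          command: ["sh", "-c", "wget -q -O- http://HANDLER_POD_IP:8080/echo?msg=prestop"]

An exec preStop hook has to finish within terminationGracePeriodSeconds before termination proceeds, which is consistent with the pod above taking several poll intervals to disappear after the delete.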
• [SLOW TEST:14.202 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1631,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:00.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 15 00:17:05.368: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8844 pod-service-account-0b54ddfe-2058-4313-af09-3ac816c1cebd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 15 00:17:05.550: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8844 pod-service-account-0b54ddfe-2058-4313-af09-3ac816c1cebd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 15 00:17:05.740: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8844 pod-service-account-0b54ddfe-2058-4313-af09-3ac816c1cebd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:05.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8844" for this suite. 
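The ServiceAccounts test above reads back the three files the kubelet projects into a pod when token automounting is enabled. A minimal pod that exposes the same mount for manual inspection (the name and image here are illustrative) might look like:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # spelled out for clarity; true is already the default
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    # token, ca.crt, and namespace are mounted by the kubelet under
    # /var/run/secrets/kubernetes.io/serviceaccount; those are exactly the
    # three files the kubectl exec commands in the log cat out
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount && sleep 600"]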
• [SLOW TEST:5.188 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":98,"skipped":1645,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:05.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:10.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4355" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1660,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:10.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 15 00:17:10.152: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:18.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6002" for this suite. 
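The InitContainer test above ("PodSpec: initContainers in spec.initContainers") depends on init containers running one at a time, each to completion, before the regular containers start on a RestartAlways pod. A minimal sketch, with illustrative names, images, and commands, is:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]   # must exit 0 before init2 starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]   # must exit 0 before the app container starts
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]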
• [SLOW TEST:8.660 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":100,"skipped":1680,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:18.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:17:19.484: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:17:21.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:17:23.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098639, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" 
is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:17:26.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:27.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6122" for this suite. STEP: Destroying namespace "webhook-6122-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.963 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":101,"skipped":1685,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:27.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 15 00:17:27.757: INFO: Waiting up to 5m0s for pod "var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24" in namespace "var-expansion-8079" to be "Succeeded or Failed" May 15 00:17:27.786: INFO: Pod "var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24": Phase="Pending", Reason="", readiness=false. Elapsed: 29.191714ms May 15 00:17:29.790: INFO: Pod "var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033494193s May 15 00:17:31.807: INFO: Pod "var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04987485s STEP: Saw pod success May 15 00:17:31.807: INFO: Pod "var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24" satisfied condition "Succeeded or Failed" May 15 00:17:31.809: INFO: Trying to get logs from node latest-worker2 pod var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24 container dapi-container: STEP: delete the pod May 15 00:17:31.850: INFO: Waiting for pod var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24 to disappear May 15 00:17:31.862: INFO: Pod var-expansion-44f877e9-4065-4671-83b4-ec582c2f0c24 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:31.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8079" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1688,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:31.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:17:32.951: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:17:35.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098653, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:17:37.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098653, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098652, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:17:40.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:40.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6413" for this suite. STEP: Destroying namespace "webhook-6413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.203 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":103,"skipped":1688,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:41.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 15 00:17:41.209: INFO: Waiting up to 5m0s for pod "pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954" in namespace "emptydir-6364" to be "Succeeded or Failed" May 15 00:17:41.228: INFO: Pod "pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.423947ms May 15 00:17:43.323: INFO: Pod "pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113260665s May 15 00:17:45.396: INFO: Pod "pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186810839s STEP: Saw pod success May 15 00:17:45.396: INFO: Pod "pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954" satisfied condition "Succeeded or Failed" May 15 00:17:45.435: INFO: Trying to get logs from node latest-worker2 pod pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954 container test-container: STEP: delete the pod May 15 00:17:45.464: INFO: Waiting for pod pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954 to disappear May 15 00:17:45.492: INFO: Pod pod-8dc3a77a-6fea-4f4d-87f0-c0b35a68c954 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:17:45.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6364" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1705,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:17:45.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-339 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 00:17:45.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 15 00:17:45.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 00:17:47.878: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 00:17:49.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:17:51.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:17:53.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:17:55.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:17:57.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:17:59.880: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:18:01.933: INFO: The status of Pod netserver-0 is Running (Ready = true) May 15 00:18:01.939: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 15 00:18:06.014: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.131:8080/dial?request=hostname&protocol=http&host=10.244.1.130&port=8080&tries=1'] Namespace:pod-network-test-339 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:18:06.014: INFO: >>> kubeConfig: /root/.kube/config I0515 00:18:06.042616 7 log.go:172] (0xc0039644d0) (0xc000e001e0) Create stream I0515 00:18:06.042646 7 log.go:172] (0xc0039644d0) (0xc000e001e0) Stream added, broadcasting: 1 I0515 00:18:06.044473 7 log.go:172] (0xc0039644d0) Reply frame received for 1 I0515 00:18:06.044508 7 log.go:172] (0xc0039644d0) (0xc001acf040) Create stream I0515 00:18:06.044519 7 log.go:172] (0xc0039644d0) (0xc001acf040) Stream added, broadcasting: 3 I0515 00:18:06.045646 7 log.go:172] (0xc0039644d0) Reply frame received for 3 I0515 00:18:06.045679 7 log.go:172] (0xc0039644d0) (0xc000e003c0) Create stream I0515 00:18:06.045693 7 log.go:172] (0xc0039644d0) (0xc000e003c0) Stream added, broadcasting: 5 I0515 00:18:06.046495 7 log.go:172] (0xc0039644d0) Reply frame received for 5 I0515 00:18:06.151838 7 log.go:172] (0xc0039644d0) Data frame received for 3 I0515 00:18:06.151870 7 log.go:172] (0xc001acf040) (3) Data frame handling I0515 00:18:06.151889 7 log.go:172] (0xc001acf040) (3) Data frame sent I0515 00:18:06.152394 7 log.go:172] (0xc0039644d0) Data frame received for 5 I0515 00:18:06.152438 7 log.go:172] (0xc0039644d0) Data frame received for 3 I0515 00:18:06.152512 7 log.go:172] (0xc001acf040) (3) Data frame handling I0515 00:18:06.152556 7 log.go:172] (0xc000e003c0) (5) Data frame handling I0515 00:18:06.154346 7 log.go:172] (0xc0039644d0) Data frame received for 1 I0515 00:18:06.154363 7 log.go:172] (0xc000e001e0) (1) Data frame handling I0515 00:18:06.154371 7 log.go:172] (0xc000e001e0) (1) Data frame sent I0515 00:18:06.154658 7 log.go:172] (0xc0039644d0) (0xc000e001e0) Stream removed, broadcasting: 1 I0515 00:18:06.154686 7 log.go:172] (0xc0039644d0) Go away received I0515 00:18:06.154788 7 log.go:172] (0xc0039644d0) (0xc000e001e0) Stream removed, broadcasting: 1 I0515 00:18:06.154809 7 log.go:172] (0xc0039644d0) (0xc001acf040) Stream removed, broadcasting: 3 I0515 00:18:06.154818 7 log.go:172] (0xc0039644d0) (0xc000e003c0) Stream removed, broadcasting: 5 May 15 00:18:06.154: INFO: Waiting for responses: map[] May 15 00:18:06.158: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.131:8080/dial?request=hostname&protocol=http&host=10.244.2.167&port=8080&tries=1'] Namespace:pod-network-test-339 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:18:06.158: INFO: >>> kubeConfig: /root/.kube/config I0515 00:18:06.190297 7 log.go:172] (0xc003bbe420) (0xc001fe48c0) Create stream I0515 00:18:06.190327 7 log.go:172] (0xc003bbe420) (0xc001fe48c0) Stream added, broadcasting: 1 I0515 00:18:06.192639 7 log.go:172] (0xc003bbe420) Reply frame received for 1 I0515 00:18:06.192689 7 log.go:172] (0xc003bbe420) (0xc001a5a000) Create stream I0515 00:18:06.192712 7 log.go:172] (0xc003bbe420) (0xc001a5a000) Stream added, broadcasting: 3 I0515 00:18:06.194274 7 log.go:172] (0xc003bbe420) Reply frame received for 3 I0515 00:18:06.194397 7 log.go:172] (0xc003bbe420) (0xc002208000) Create stream I0515 00:18:06.194428 7 log.go:172] (0xc003bbe420) (0xc002208000) Stream added, broadcasting: 5 I0515 00:18:06.195484 7 log.go:172] (0xc003bbe420) Reply frame received for 5 I0515 00:18:06.266544 7 log.go:172] (0xc003bbe420) Data frame received for 3 I0515 00:18:06.266596 7 log.go:172] (0xc001a5a000) (3) Data frame handling I0515 00:18:06.266633 7 log.go:172] (0xc001a5a000) (3) Data frame sent I0515 00:18:06.266913 
7 log.go:172] (0xc003bbe420) Data frame received for 3 I0515 00:18:06.266957 7 log.go:172] (0xc001a5a000) (3) Data frame handling I0515 00:18:06.266989 7 log.go:172] (0xc003bbe420) Data frame received for 5 I0515 00:18:06.266998 7 log.go:172] (0xc002208000) (5) Data frame handling I0515 00:18:06.268986 7 log.go:172] (0xc003bbe420) Data frame received for 1 I0515 00:18:06.269018 7 log.go:172] (0xc001fe48c0) (1) Data frame handling I0515 00:18:06.269049 7 log.go:172] (0xc001fe48c0) (1) Data frame sent I0515 00:18:06.269072 7 log.go:172] (0xc003bbe420) (0xc001fe48c0) Stream removed, broadcasting: 1 I0515 00:18:06.269427 7 log.go:172] (0xc003bbe420) (0xc001fe48c0) Stream removed, broadcasting: 1 I0515 00:18:06.269481 7 log.go:172] (0xc003bbe420) (0xc001a5a000) Stream removed, broadcasting: 3 I0515 00:18:06.269494 7 log.go:172] (0xc003bbe420) (0xc002208000) Stream removed, broadcasting: 5 I0515 00:18:06.269585 7 log.go:172] (0xc003bbe420) Go away received May 15 00:18:06.269: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:06.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-339" for this suite. • [SLOW TEST:20.687 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1717,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:06.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:18:06.356: INFO: Creating deployment "test-recreate-deployment" May 15 00:18:06.360: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 15 00:18:06.398: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 15 00:18:08.406: INFO: Waiting deployment "test-recreate-deployment" to complete May 15 00:18:08.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098686, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098686, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098686, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098686, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:18:10.414: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 15 00:18:10.424: INFO: Updating deployment test-recreate-deployment May 15 00:18:10.424: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 15 00:18:11.407: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3856 /apis/apps/v1/namespaces/deployment-3856/deployments/test-recreate-deployment 3dddc1b8-390a-4ff7-bd0e-cf33b3aea644 4673920 2 2020-05-15 00:18:06 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-15 00:18:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 00:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00465d398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-15 00:18:11 +0000 UTC,LastTransitionTime:2020-05-15 00:18:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-15 00:18:11 +0000 UTC,LastTransitionTime:2020-05-15 00:18:06 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 15 00:18:11.587: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3856 /apis/apps/v1/namespaces/deployment-3856/replicasets/test-recreate-deployment-d5667d9c7 dae38f9d-bb5d-47e6-9223-69789e894ac1 4673918 1 2020-05-15 00:18:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3dddc1b8-390a-4ff7-bd0e-cf33b3aea644 0xc0032ccd70 0xc0032ccd71}] [] [{kube-controller-manager Update apps/v1 2020-05-15 00:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3dddc1b8-390a-4ff7-bd0e-cf33b3aea644\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032ccde8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 00:18:11.587: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 15 00:18:11.588: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-3856 /apis/apps/v1/namespaces/deployment-3856/replicasets/test-recreate-deployment-6d65b9f6d8 1f73a9a5-1e6f-48af-a746-69765f4b03f9 4673906 2 2020-05-15 00:18:06 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3dddc1b8-390a-4ff7-bd0e-cf33b3aea644 0xc0032ccc77 0xc0032ccc78}] [] [{kube-controller-manager Update apps/v1 2020-05-15 00:18:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3dddc1b8-390a-4ff7-bd0e-cf33b3aea644\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032ccd08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 00:18:11.643: INFO: Pod "test-recreate-deployment-d5667d9c7-5lq9m" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-5lq9m test-recreate-deployment-d5667d9c7- deployment-3856 /api/v1/namespaces/deployment-3856/pods/test-recreate-deployment-d5667d9c7-5lq9m 9e378c9d-3aa4-4cfb-8802-f43b8917e5aa 4673919 0 2020-05-15 00:18:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 
dae38f9d-bb5d-47e6-9223-69789e894ac1 0xc0032cd2b0 0xc0032cd2b1}] [] [{kube-controller-manager Update v1 2020-05-15 00:18:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dae38f9d-bb5d-47e6-9223-69789e894ac1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 00:18:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4t4rx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4t4rx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4t4rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:18:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:18:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:18:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:18:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-15 00:18:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:11.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3856" for this suite. 
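The RecreateDeployment behavior verified above comes down to a single field on the Deployment spec: with strategy type Recreate, the controller terminates every old pod before it creates replacements. A minimal illustrative manifest follows; the name is hypothetical and only the image is taken from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo                  # hypothetical name, not from this run
spec:
  replicas: 1
  strategy:
    type: Recreate                     # old pods are deleted before new ones are created
  selector:
    matchLabels:
      name: recreate-demo
  template:
    metadata:
      labels:
        name: recreate-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF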
• [SLOW TEST:5.555 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":106,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:11.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:18:13.147: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9897" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":107,"skipped":1766,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:15.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 00:18:15.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2887' May 15 00:18:16.091: INFO: stderr: "" May 15 00:18:16.091: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 15 00:18:21.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2887 -o json' May 15 00:18:21.252: INFO: stderr: "" May 15 00:18:21.252: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-15T00:18:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-15T00:18:16Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n 
},\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.132\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-15T00:18:19Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2887\",\n \"resourceVersion\": \"4674026\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2887/pods/e2e-test-httpd-pod\",\n \"uid\": \"ff17042c-9665-45c7-8662-4b1ffedee407\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5ctfc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5ctfc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5ctfc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T00:18:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T00:18:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T00:18:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T00:18:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://9dfe1ee209df0d3f49c1fbf7afa935ad00f954a8443664c49188db57ef362cdf\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-15T00:18:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.132\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.132\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-15T00:18:16Z\"\n }\n}\n" STEP: replace the image in the pod May 15 00:18:21.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2887' May 15 00:18:21.605: INFO: stderr: "" May 15 00:18:21.605: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right 
image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 15 00:18:21.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2887' May 15 00:18:34.854: INFO: stderr: "" May 15 00:18:34.854: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:34.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2887" for this suite. • [SLOW TEST:19.181 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":108,"skipped":1768,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:34.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-7689/secret-test-7ca7d577-037a-4c15-9969-cdd34d4fab82 STEP: Creating a pod to test consume secrets May 15 00:18:34.949: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6" in namespace "secrets-7689" to be "Succeeded or Failed" May 15 00:18:34.959: INFO: Pod "pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183799ms May 15 00:18:37.005: INFO: Pod "pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055586026s May 15 00:18:39.009: INFO: Pod "pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059267614s STEP: Saw pod success May 15 00:18:39.009: INFO: Pod "pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6" satisfied condition "Succeeded or Failed" May 15 00:18:39.011: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6 container env-test: STEP: delete the pod May 15 00:18:39.045: INFO: Waiting for pod pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6 to disappear May 15 00:18:39.066: INFO: Pod pod-configmaps-f1212771-9373-455d-a1d0-3cb91d5abde6 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:39.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7689" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1789,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:39.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 15 00:18:39.924: INFO: created pod pod-service-account-defaultsa May 15 00:18:39.924: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 15 00:18:39.929: INFO: created pod pod-service-account-mountsa May 15 00:18:39.929: INFO: pod pod-service-account-mountsa service account token volume mount: true May 15 00:18:39.988: INFO: created pod pod-service-account-nomountsa May 15 00:18:39.988: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 15 00:18:40.009: INFO: created pod pod-service-account-defaultsa-mountspec May 15 00:18:40.009: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 15 00:18:40.041: INFO: created pod pod-service-account-mountsa-mountspec May 15 00:18:40.042: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 15 00:18:40.077: INFO: created pod pod-service-account-nomountsa-mountspec May 15 00:18:40.077: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 15 00:18:40.132: INFO: created pod pod-service-account-defaultsa-nomountspec May 15 00:18:40.132: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 15 00:18:40.170: INFO: created pod pod-service-account-mountsa-nomountspec May 15 00:18:40.170: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 15 00:18:40.292: INFO: created pod pod-service-account-nomountsa-nomountspec May 15 00:18:40.292: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:40.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8550" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":110,"skipped":1809,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:40.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:53.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3589" for this suite. • [SLOW TEST:13.541 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":111,"skipped":1814,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:53.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 15 00:18:53.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4438' May 15 00:18:54.749: INFO: stderr: "" May 15 00:18:54.749: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
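The opt-out that the ServiceAccounts spec above exercises is controlled by automountServiceAccountToken, which can be set on the ServiceAccount or on the pod spec; when both are set, the pod-level value wins. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # hypothetical name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod                    # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting overrides the ServiceAccount
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount 2>&1 || true"]
EOF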
May 15 00:18:55.851: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:55.851: INFO: Found 0 / 1 May 15 00:18:56.753: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:56.753: INFO: Found 0 / 1 May 15 00:18:57.784: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:57.784: INFO: Found 0 / 1 May 15 00:18:58.753: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:58.753: INFO: Found 1 / 1 May 15 00:18:58.753: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 15 00:18:58.755: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:58.755: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 00:18:58.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-gwcq7 --namespace=kubectl-4438 -p {"metadata":{"annotations":{"x":"y"}}}' May 15 00:18:58.848: INFO: stderr: "" May 15 00:18:58.848: INFO: stdout: "pod/agnhost-master-gwcq7 patched\n" STEP: checking annotations May 15 00:18:58.864: INFO: Selector matched 1 pods for map[app:agnhost] May 15 00:18:58.864: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:18:58.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4438" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":112,"skipped":1817,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:18:58.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 00:18:59.316: INFO: Waiting up to 5m0s for pod "pod-749c2508-cad6-427b-9f7d-0116d6c9dd53" in namespace "emptydir-6540" to be "Succeeded or Failed" May 15 00:18:59.326: INFO: Pod "pod-749c2508-cad6-427b-9f7d-0116d6c9dd53": Phase="Pending", Reason="", readiness=false. Elapsed: 9.474628ms May 15 00:19:01.330: INFO: Pod "pod-749c2508-cad6-427b-9f7d-0116d6c9dd53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013669826s May 15 00:19:03.334: INFO: Pod "pod-749c2508-cad6-427b-9f7d-0116d6c9dd53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018203266s STEP: Saw pod success May 15 00:19:03.335: INFO: Pod "pod-749c2508-cad6-427b-9f7d-0116d6c9dd53" satisfied condition "Succeeded or Failed" May 15 00:19:03.337: INFO: Trying to get logs from node latest-worker pod pod-749c2508-cad6-427b-9f7d-0116d6c9dd53 container test-container: STEP: delete the pod May 15 00:19:03.397: INFO: Waiting for pod pod-749c2508-cad6-427b-9f7d-0116d6c9dd53 to disappear May 15 00:19:03.403: INFO: Pod pod-749c2508-cad6-427b-9f7d-0116d6c9dd53 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:03.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6540" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1829,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:03.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb May 15 00:19:03.541: INFO: Pod name my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb: Found 0 pods out of 1 May 15 00:19:08.544: INFO: Pod name my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb: Found 1 pods out of 1 May 15 00:19:08.544: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb" are running May 15 00:19:08.550: INFO: Pod "my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb-nfvmt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:19:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:19:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:19:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:19:03 +0000 UTC Reason: Message:}]) May 15 00:19:08.550: INFO: Trying to dial the pod May 15 00:19:13.561: INFO: Controller my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb: Got expected result from replica 1 [my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb-nfvmt]: "my-hostname-basic-3f52cd26-7288-4982-8f86-08a2dd899afb-nfvmt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 
00:19:13.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9058" for this suite. • [SLOW TEST:10.161 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":114,"skipped":1830,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:13.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-88a6f80d-42ba-49ee-877b-adc6c2cd7c78 STEP: Creating a pod to test consume configMaps May 15 00:19:13.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83" in namespace "projected-9148" to be "Succeeded or Failed" May 15 00:19:13.697: INFO: Pod "pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83": Phase="Pending", Reason="", readiness=false. Elapsed: 13.01929ms May 15 00:19:15.702: INFO: Pod "pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017373613s May 15 00:19:17.706: INFO: Pod "pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021718842s STEP: Saw pod success May 15 00:19:17.706: INFO: Pod "pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83" satisfied condition "Succeeded or Failed" May 15 00:19:17.710: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83 container projected-configmap-volume-test: STEP: delete the pod May 15 00:19:17.896: INFO: Waiting for pod pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83 to disappear May 15 00:19:17.949: INFO: Pod pod-projected-configmaps-79607bb2-c263-4e89-bed6-f08ff8192a83 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:17.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9148" for this suite. 
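The projected-configMap consumption checked above can be reproduced with a ConfigMap plus a pod that mounts it through a projected volume source; the names and file contents below are illustrative, not taken from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                    # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF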
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:18.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 15 00:19:18.072: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. May 15 00:19:18.937: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 15 00:19:21.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098759, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:19:23.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098759, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725098758, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:19:26.383: INFO: Waited 623.507139ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:27.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1077" for this suite. • [SLOW TEST:9.223 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":116,"skipped":1891,"failed":0} S ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:27.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:27.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4214" for this suite. 
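The PodTemplates lifecycle spec above creates, reads, and deletes PodTemplate objects; the resource itself is just a named, reusable pod spec. An illustrative round trip, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-podtemplate               # hypothetical name
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: main
      image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl get podtemplate demo-podtemplate -o yaml   # read it back
kubectl delete podtemplate demo-podtemplate        # complete the lifecycle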
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":117,"skipped":1892,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:27.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:19:27.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41" in namespace "projected-9997" to be "Succeeded or Failed" May 15 00:19:27.868: INFO: Pod "downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41": Phase="Pending", Reason="", readiness=false. Elapsed: 185.608713ms May 15 00:19:29.872: INFO: Pod "downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190026975s May 15 00:19:31.916: INFO: Pod "downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234405243s STEP: Saw pod success May 15 00:19:31.917: INFO: Pod "downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41" satisfied condition "Succeeded or Failed" May 15 00:19:31.980: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41 container client-container: STEP: delete the pod May 15 00:19:31.999: INFO: Waiting for pod downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41 to disappear May 15 00:19:32.003: INFO: Pod downwardapi-volume-c662b3c3-ee60-40e3-bb15-6e4aa54afe41 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:32.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9997" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":1892,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:32.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 15 00:19:32.116: INFO: Waiting up to 5m0s for pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6" in namespace "emptydir-4946" to be "Succeeded or Failed" May 15 00:19:32.126: INFO: Pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.994236ms May 15 00:19:34.130: INFO: Pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014287495s May 15 00:19:36.133: INFO: Pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6": Phase="Running", Reason="", readiness=true. Elapsed: 4.017427828s May 15 00:19:38.136: INFO: Pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020757845s STEP: Saw pod success May 15 00:19:38.137: INFO: Pod "pod-832733a2-6d8e-429a-80d4-34ce22fb7de6" satisfied condition "Succeeded or Failed" May 15 00:19:38.139: INFO: Trying to get logs from node latest-worker2 pod pod-832733a2-6d8e-429a-80d4-34ce22fb7de6 container test-container: STEP: delete the pod May 15 00:19:38.181: INFO: Waiting for pod pod-832733a2-6d8e-429a-80d4-34ce22fb7de6 to disappear May 15 00:19:38.192: INFO: Pod pod-832733a2-6d8e-429a-80d4-34ce22fb7de6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:38.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4946" for this suite. 
• [SLOW TEST:6.266 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":1894,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:38.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0515 00:19:39.573587 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 00:19:39.573: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:39.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5345" for this suite. 
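The orphaning behavior verified above is driven by deleteOptions with propagationPolicy Orphan: the owner object is deleted, but its dependents keep running and lose their ownerReferences. A sketch assuming the default namespace; the deployment name is hypothetical:

# create an owner, then delete it while orphaning its dependents
kubectl create deployment orphan-demo --image=docker.io/library/httpd:2.4.38-alpine
# newer kubectl (>= 1.20) spells it --cascade=orphan; older releases used --cascade=false
kubectl delete deployment orphan-demo --cascade=orphan
# equivalent call against the API itself, via kubectl proxy:
kubectl proxy --port=8001 &
curl -X DELETE http://localhost:8001/apis/apps/v1/namespaces/default/deployments/orphan-demo \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
kubectl get replicaset -l app=orphan-demo   # the ReplicaSet survives, now ownerless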
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":120,"skipped":1898,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:39.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:19:39.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957" in namespace "downward-api-1024" to be "Succeeded or Failed" May 15 00:19:39.689: INFO: Pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957": Phase="Pending", Reason="", readiness=false. Elapsed: 35.986379ms May 15 00:19:41.951: INFO: Pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298462588s May 15 00:19:43.961: INFO: Pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957": Phase="Running", Reason="", readiness=true. Elapsed: 4.308228172s May 15 00:19:45.965: INFO: Pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.312369941s STEP: Saw pod success May 15 00:19:45.965: INFO: Pod "downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957" satisfied condition "Succeeded or Failed" May 15 00:19:45.968: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957 container client-container: STEP: delete the pod May 15 00:19:46.034: INFO: Waiting for pod downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957 to disappear May 15 00:19:46.061: INFO: Pod downwardapi-volume-e152ded6-f8f5-43b0-9611-c0d507162957 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:19:46.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1024" for this suite. 
• [SLOW TEST:6.498 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":1903,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:19:46.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4689 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 00:19:46.390: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 15 00:19:46.505: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 00:19:48.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 00:19:50.558: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:19:52.526: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:19:54.510: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:19:56.510: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:19:58.510: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:20:00.510: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 00:20:02.509: INFO: The status of Pod netserver-0 is Running (Ready = true) May 15 00:20:02.515: INFO: The status of Pod netserver-1 is Running (Ready = false) May 15 00:20:04.518: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 15 00:20:08.556: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.180:8080/dial?request=hostname&protocol=udp&host=10.244.1.145&port=8081&tries=1'] Namespace:pod-network-test-4689 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:20:08.556: INFO: >>> kubeConfig: /root/.kube/config I0515 00:20:08.589958 7 log.go:172] (0xc0037ac580) (0xc0008d3220) Create stream I0515 00:20:08.589991 7 log.go:172] (0xc0037ac580) (0xc0008d3220) Stream added, broadcasting: 1 I0515 00:20:08.592402 7 log.go:172] (0xc0037ac580) Reply frame received for 1 I0515 00:20:08.592474 7 log.go:172] (0xc0037ac580) (0xc0003f1220) Create stream I0515 00:20:08.592491 7 log.go:172] (0xc0037ac580) (0xc0003f1220) Stream added, broadcasting: 3 I0515 00:20:08.593807 7 log.go:172] (0xc0037ac580) Reply frame received for 3 I0515 00:20:08.593835 7 
log.go:172] (0xc0037ac580) (0xc0008d35e0) Create stream I0515 00:20:08.593848 7 log.go:172] (0xc0037ac580) (0xc0008d35e0) Stream added, broadcasting: 5 I0515 00:20:08.594870 7 log.go:172] (0xc0037ac580) Reply frame received for 5 I0515 00:20:08.684254 7 log.go:172] (0xc0037ac580) Data frame received for 3 I0515 00:20:08.684283 7 log.go:172] (0xc0003f1220) (3) Data frame handling I0515 00:20:08.684298 7 log.go:172] (0xc0003f1220) (3) Data frame sent I0515 00:20:08.684728 7 log.go:172] (0xc0037ac580) Data frame received for 5 I0515 00:20:08.684757 7 log.go:172] (0xc0008d35e0) (5) Data frame handling I0515 00:20:08.684804 7 log.go:172] (0xc0037ac580) Data frame received for 3 I0515 00:20:08.684818 7 log.go:172] (0xc0003f1220) (3) Data frame handling I0515 00:20:08.686473 7 log.go:172] (0xc0037ac580) Data frame received for 1 I0515 00:20:08.686503 7 log.go:172] (0xc0008d3220) (1) Data frame handling I0515 00:20:08.686536 7 log.go:172] (0xc0008d3220) (1) Data frame sent I0515 00:20:08.686554 7 log.go:172] (0xc0037ac580) (0xc0008d3220) Stream removed, broadcasting: 1 I0515 00:20:08.686587 7 log.go:172] (0xc0037ac580) Go away received I0515 00:20:08.686691 7 log.go:172] (0xc0037ac580) (0xc0008d3220) Stream removed, broadcasting: 1 I0515 00:20:08.686713 7 log.go:172] (0xc0037ac580) (0xc0003f1220) Stream removed, broadcasting: 3 I0515 00:20:08.686725 7 log.go:172] (0xc0037ac580) (0xc0008d35e0) Stream removed, broadcasting: 5 May 15 00:20:08.686: INFO: Waiting for responses: map[] May 15 00:20:08.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.180:8080/dial?request=hostname&protocol=udp&host=10.244.2.179&port=8081&tries=1'] Namespace:pod-network-test-4689 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:20:08.690: INFO: >>> kubeConfig: /root/.kube/config I0515 00:20:08.720303 7 log.go:172] (0xc0037acb00) (0xc000a44d20) Create stream I0515 00:20:08.720347 7 log.go:172] (0xc0037acb00) (0xc000a44d20) Stream added, broadcasting: 1 I0515 00:20:08.733519 7 log.go:172] (0xc0037acb00) Reply frame received for 1 I0515 00:20:08.733588 7 log.go:172] (0xc0037acb00) (0xc000a440a0) Create stream I0515 00:20:08.733599 7 log.go:172] (0xc0037acb00) (0xc000a440a0) Stream added, broadcasting: 3 I0515 00:20:08.734317 7 log.go:172] (0xc0037acb00) Reply frame received for 3 I0515 00:20:08.734343 7 log.go:172] (0xc0037acb00) (0xc000a441e0) Create stream I0515 00:20:08.734351 7 log.go:172] (0xc0037acb00) (0xc000a441e0) Stream added, broadcasting: 5 I0515 00:20:08.735032 7 log.go:172] (0xc0037acb00) Reply frame received for 5 I0515 00:20:08.812519 7 log.go:172] (0xc0037acb00) Data frame received for 3 I0515 00:20:08.812559 7 log.go:172] (0xc000a440a0) (3) Data frame handling I0515 00:20:08.812579 7 log.go:172] (0xc000a440a0) (3) Data frame sent I0515 00:20:08.813424 7 log.go:172] (0xc0037acb00) Data frame received for 3 I0515 00:20:08.813456 7 log.go:172] (0xc000a440a0) (3) Data frame handling I0515 00:20:08.813780 7 log.go:172] (0xc0037acb00) Data frame received for 5 I0515 00:20:08.813797 7 log.go:172] (0xc000a441e0) (5) Data frame handling I0515 00:20:08.815707 7 log.go:172] (0xc0037acb00) Data frame received for 1 I0515 00:20:08.815728 7 log.go:172] (0xc000a44d20) (1) Data frame handling I0515 00:20:08.815745 7 log.go:172] (0xc000a44d20) (1) Data frame sent I0515 00:20:08.815766 7 log.go:172] (0xc0037acb00) (0xc000a44d20) Stream removed, broadcasting: 1 I0515 00:20:08.815848 7 log.go:172] (0xc0037acb00) 
(0xc000a44d20) Stream removed, broadcasting: 1 I0515 00:20:08.815865 7 log.go:172] (0xc0037acb00) (0xc000a440a0) Stream removed, broadcasting: 3 I0515 00:20:08.815878 7 log.go:172] (0xc0037acb00) (0xc000a441e0) Stream removed, broadcasting: 5 May 15 00:20:08.815: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:08.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0515 00:20:08.816342 7 log.go:172] (0xc0037acb00) Go away received STEP: Destroying namespace "pod-network-test-4689" for this suite. • [SLOW TEST:22.744 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":1918,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:08.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-261997d6-ae29-4044-a595-11f00b29446e STEP: Creating a pod to test consume secrets May 15 00:20:08.905: INFO: Waiting up to 5m0s for pod "pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4" in namespace "secrets-2658" to be "Succeeded or Failed" May 15 00:20:08.933: INFO: Pod "pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.716641ms May 15 00:20:10.937: INFO: Pod "pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032003305s May 15 00:20:12.941: INFO: Pod "pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035924559s STEP: Saw pod success May 15 00:20:12.941: INFO: Pod "pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4" satisfied condition "Succeeded or Failed" May 15 00:20:12.944: INFO: Trying to get logs from node latest-worker pod pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4 container secret-env-test: STEP: delete the pod May 15 00:20:13.266: INFO: Waiting for pod pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4 to disappear May 15 00:20:13.286: INFO: Pod pod-secrets-e7021f00-5c1a-4810-ab16-780344414cb4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:13.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2658" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":123,"skipped":1931,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:13.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 00:20:13.440: INFO: Waiting up to 5m0s for pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26" in namespace "emptydir-3577" to be "Succeeded or Failed" May 15 00:20:13.448: INFO: Pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49366ms May 15 00:20:15.760: INFO: Pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320723095s May 15 00:20:17.983: INFO: Pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.543033899s May 15 00:20:19.987: INFO: Pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.54782793s STEP: Saw pod success May 15 00:20:19.988: INFO: Pod "pod-e053008d-5383-4b88-ae74-1c0cf827ed26" satisfied condition "Succeeded or Failed" May 15 00:20:19.991: INFO: Trying to get logs from node latest-worker2 pod pod-e053008d-5383-4b88-ae74-1c0cf827ed26 container test-container: STEP: delete the pod May 15 00:20:20.055: INFO: Waiting for pod pod-e053008d-5383-4b88-ae74-1c0cf827ed26 to disappear May 15 00:20:20.057: INFO: Pod pod-e053008d-5383-4b88-ae74-1c0cf827ed26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:20.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3577" for this suite. 
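The (root,0666,tmpfs) spec above boils down to one throwaway pod: an emptyDir volume backed by memory, a container that writes a file into it, and a check of the resulting mode. Below is a minimal sketch of that shape in Go against client-go, assuming a reachable cluster at the kubeconfig path used throughout this run; the pod name, image, and shell command are illustrative stand-ins, not the suite's actual mounttest fixture.

// Sketch only: a pod roughly equivalent to what the emptydir spec creates.
// Assumes /root/.kube/config works; name, image, and command are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, per the test name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file at mode 0666 and print the mode back for verification.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

Waiting for the "Succeeded or Failed" condition, as the log does, is then a matter of polling the created pod's Status.Phase until it leaves Pending/Running.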
• [SLOW TEST:6.774 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":1935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:20.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:20:20.241: INFO: Waiting up to 5m0s for pod "downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a" in namespace "downward-api-425" to be "Succeeded or Failed" May 15 00:20:20.254: INFO: Pod "downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.617673ms May 15 00:20:22.336: INFO: Pod "downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094118943s May 15 00:20:24.339: INFO: Pod "downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097500713s STEP: Saw pod success May 15 00:20:24.339: INFO: Pod "downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a" satisfied condition "Succeeded or Failed" May 15 00:20:24.341: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a container client-container: STEP: delete the pod May 15 00:20:24.402: INFO: Waiting for pod downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a to disappear May 15 00:20:24.457: INFO: Pod downwardapi-volume-549abde4-5010-467e-8ffd-db9fe592b62a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:24.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-425" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":1970,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:24.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:35.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6588" for this suite. • [SLOW TEST:11.245 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":288,"completed":126,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:35.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:20:35.790: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1079 I0515 00:20:35.808922 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1079, replica count: 1 I0515 00:20:36.859327 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:20:37.859604 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:20:38.859826 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:20:39.091: INFO: Created: latency-svc-ggkvk May 15 00:20:39.128: INFO: Got endpoints: latency-svc-ggkvk [167.960968ms] May 15 00:20:39.177: INFO: Created: latency-svc-gxlkg May 15 00:20:39.264: INFO: Got endpoints: latency-svc-gxlkg [136.065621ms] May 15 00:20:39.266: INFO: Created: latency-svc-f7ppq May 15 00:20:39.286: INFO: Got endpoints: latency-svc-f7ppq [158.751345ms] May 15 00:20:39.306: INFO: Created: latency-svc-bfsd7 May 15 00:20:39.322: INFO: Got endpoints: latency-svc-bfsd7 [194.033913ms] May 15 00:20:39.342: INFO: Created: latency-svc-kjbn7 May 15 00:20:39.359: INFO: Got endpoints: latency-svc-kjbn7 [231.034461ms] May 15 00:20:39.396: INFO: Created: latency-svc-79srv May 15 00:20:39.400: INFO: Got endpoints: latency-svc-79srv [271.88981ms] May 15 00:20:39.453: INFO: Created: latency-svc-8ttz7 May 15 00:20:39.472: INFO: Got endpoints: latency-svc-8ttz7 [344.283154ms] May 15 00:20:39.492: INFO: Created: latency-svc-l7mzh May 15 00:20:39.534: INFO: Got endpoints: latency-svc-l7mzh [405.970732ms] May 15 00:20:39.539: INFO: Created: latency-svc-wd99b May 15 00:20:39.615: INFO: Got endpoints: latency-svc-wd99b [486.682426ms] May 15 00:20:39.683: INFO: Created: latency-svc-mm76h May 15 00:20:39.698: INFO: Got endpoints: latency-svc-mm76h [570.549375ms] May 15 00:20:39.765: INFO: Created: latency-svc-pwx2j May 15 00:20:39.776: INFO: Got endpoints: latency-svc-pwx2j [648.038435ms] May 15 00:20:39.833: INFO: Created: latency-svc-jx2tm May 15 00:20:39.849: INFO: Got endpoints: latency-svc-jx2tm [720.916196ms] May 15 00:20:39.870: INFO: Created: latency-svc-5r5wz May 15 00:20:39.891: INFO: Got endpoints: latency-svc-5r5wz [763.563704ms] May 15 00:20:39.953: INFO: Created: latency-svc-jtm46 May 15 00:20:39.956: INFO: Got endpoints: latency-svc-jtm46 [828.144933ms] May 15 00:20:39.981: INFO: Created: latency-svc-cr5nw May 15 00:20:39.990: INFO: Got endpoints: 
latency-svc-cr5nw [862.213451ms] May 15 00:20:40.009: INFO: Created: latency-svc-jz44m May 15 00:20:40.020: INFO: Got endpoints: latency-svc-jz44m [892.239202ms] May 15 00:20:40.038: INFO: Created: latency-svc-cl69v May 15 00:20:40.051: INFO: Got endpoints: latency-svc-cl69v [787.064087ms] May 15 00:20:40.096: INFO: Created: latency-svc-hnw5z May 15 00:20:40.119: INFO: Got endpoints: latency-svc-hnw5z [832.751321ms] May 15 00:20:40.119: INFO: Created: latency-svc-nt6m2 May 15 00:20:40.144: INFO: Got endpoints: latency-svc-nt6m2 [822.154057ms] May 15 00:20:40.173: INFO: Created: latency-svc-gmclh May 15 00:20:40.195: INFO: Got endpoints: latency-svc-gmclh [835.990037ms] May 15 00:20:40.270: INFO: Created: latency-svc-6l26s May 15 00:20:40.276: INFO: Got endpoints: latency-svc-6l26s [876.609897ms] May 15 00:20:40.293: INFO: Created: latency-svc-ncqrt May 15 00:20:40.307: INFO: Got endpoints: latency-svc-ncqrt [834.157473ms] May 15 00:20:40.323: INFO: Created: latency-svc-98xhk May 15 00:20:40.337: INFO: Got endpoints: latency-svc-98xhk [803.458535ms] May 15 00:20:40.353: INFO: Created: latency-svc-v4t2q May 15 00:20:40.367: INFO: Got endpoints: latency-svc-v4t2q [752.523978ms] May 15 00:20:40.470: INFO: Created: latency-svc-zgq29 May 15 00:20:40.501: INFO: Got endpoints: latency-svc-zgq29 [802.079676ms] May 15 00:20:40.554: INFO: Created: latency-svc-mrfrp May 15 00:20:40.623: INFO: Got endpoints: latency-svc-mrfrp [846.931293ms] May 15 00:20:40.658: INFO: Created: latency-svc-9sjc6 May 15 00:20:40.675: INFO: Got endpoints: latency-svc-9sjc6 [826.110636ms] May 15 00:20:40.691: INFO: Created: latency-svc-kh9p6 May 15 00:20:40.707: INFO: Got endpoints: latency-svc-kh9p6 [816.124197ms] May 15 00:20:40.773: INFO: Created: latency-svc-wg6cj May 15 00:20:40.791: INFO: Got endpoints: latency-svc-wg6cj [834.986808ms] May 15 00:20:40.816: INFO: Created: latency-svc-fpw9f May 15 00:20:40.842: INFO: Got endpoints: latency-svc-fpw9f [851.559441ms] May 15 00:20:40.866: INFO: Created: latency-svc-92wnq May 15 00:20:40.904: INFO: Got endpoints: latency-svc-92wnq [884.194452ms] May 15 00:20:40.913: INFO: Created: latency-svc-7mwg9 May 15 00:20:40.926: INFO: Got endpoints: latency-svc-7mwg9 [875.090226ms] May 15 00:20:40.971: INFO: Created: latency-svc-hq8hf May 15 00:20:40.991: INFO: Got endpoints: latency-svc-hq8hf [871.584693ms] May 15 00:20:41.072: INFO: Created: latency-svc-856j6 May 15 00:20:41.087: INFO: Got endpoints: latency-svc-856j6 [942.913641ms] May 15 00:20:41.105: INFO: Created: latency-svc-qjwjx May 15 00:20:41.117: INFO: Got endpoints: latency-svc-qjwjx [921.619464ms] May 15 00:20:41.157: INFO: Created: latency-svc-rw46d May 15 00:20:41.222: INFO: Got endpoints: latency-svc-rw46d [945.851381ms] May 15 00:20:41.243: INFO: Created: latency-svc-t66jd May 15 00:20:41.255: INFO: Got endpoints: latency-svc-t66jd [948.688266ms] May 15 00:20:41.298: INFO: Created: latency-svc-slw97 May 15 00:20:41.309: INFO: Got endpoints: latency-svc-slw97 [971.824696ms] May 15 00:20:41.511: INFO: Created: latency-svc-2wwrd May 15 00:20:41.525: INFO: Got endpoints: latency-svc-2wwrd [1.158093659s] May 15 00:20:41.726: INFO: Created: latency-svc-stqzn May 15 00:20:41.775: INFO: Got endpoints: latency-svc-stqzn [1.27427042s] May 15 00:20:41.777: INFO: Created: latency-svc-pft57 May 15 00:20:41.799: INFO: Got endpoints: latency-svc-pft57 [1.175770221s] May 15 00:20:41.875: INFO: Created: latency-svc-ndfz5 May 15 00:20:41.891: INFO: Got endpoints: latency-svc-ndfz5 [1.216324826s] May 15 00:20:41.933: INFO: Created: 
latency-svc-zwj4s May 15 00:20:41.953: INFO: Got endpoints: latency-svc-zwj4s [1.245827785s] May 15 00:20:42.065: INFO: Created: latency-svc-j8blr May 15 00:20:42.111: INFO: Got endpoints: latency-svc-j8blr [1.319403069s] May 15 00:20:42.262: INFO: Created: latency-svc-tnt7p May 15 00:20:42.297: INFO: Got endpoints: latency-svc-tnt7p [1.455589136s] May 15 00:20:42.407: INFO: Created: latency-svc-46gtg May 15 00:20:42.415: INFO: Got endpoints: latency-svc-46gtg [1.510609915s] May 15 00:20:42.462: INFO: Created: latency-svc-swjq5 May 15 00:20:42.499: INFO: Got endpoints: latency-svc-swjq5 [1.572909832s] May 15 00:20:42.650: INFO: Created: latency-svc-c9b6q May 15 00:20:42.689: INFO: Got endpoints: latency-svc-c9b6q [1.698227261s] May 15 00:20:42.793: INFO: Created: latency-svc-dg9zk May 15 00:20:42.830: INFO: Got endpoints: latency-svc-dg9zk [1.742773956s] May 15 00:20:42.911: INFO: Created: latency-svc-qmsfl May 15 00:20:42.955: INFO: Got endpoints: latency-svc-qmsfl [1.838584254s] May 15 00:20:42.983: INFO: Created: latency-svc-585bk May 15 00:20:42.998: INFO: Got endpoints: latency-svc-585bk [1.77540556s] May 15 00:20:43.117: INFO: Created: latency-svc-6zh9k May 15 00:20:43.142: INFO: Got endpoints: latency-svc-6zh9k [1.886597939s] May 15 00:20:43.272: INFO: Created: latency-svc-vbq6z May 15 00:20:43.307: INFO: Created: latency-svc-bdktl May 15 00:20:43.307: INFO: Got endpoints: latency-svc-vbq6z [1.997879042s] May 15 00:20:43.444: INFO: Got endpoints: latency-svc-bdktl [1.918140063s] May 15 00:20:43.481: INFO: Created: latency-svc-542zq May 15 00:20:43.497: INFO: Got endpoints: latency-svc-542zq [1.721906724s] May 15 00:20:43.541: INFO: Created: latency-svc-mggbs May 15 00:20:43.583: INFO: Got endpoints: latency-svc-mggbs [1.784546834s] May 15 00:20:43.615: INFO: Created: latency-svc-2rqxt May 15 00:20:43.639: INFO: Got endpoints: latency-svc-2rqxt [1.747912207s] May 15 00:20:43.726: INFO: Created: latency-svc-c5hr6 May 15 00:20:43.777: INFO: Got endpoints: latency-svc-c5hr6 [1.823347013s] May 15 00:20:43.777: INFO: Created: latency-svc-ftp8p May 15 00:20:43.807: INFO: Got endpoints: latency-svc-ftp8p [1.696310755s] May 15 00:20:43.882: INFO: Created: latency-svc-qmw9c May 15 00:20:43.893: INFO: Got endpoints: latency-svc-qmw9c [1.59586389s] May 15 00:20:44.025: INFO: Created: latency-svc-9v7gk May 15 00:20:44.053: INFO: Created: latency-svc-h8ln5 May 15 00:20:44.053: INFO: Got endpoints: latency-svc-9v7gk [1.638297029s] May 15 00:20:44.087: INFO: Got endpoints: latency-svc-h8ln5 [1.587742679s] May 15 00:20:44.372: INFO: Created: latency-svc-tgld9 May 15 00:20:44.386: INFO: Got endpoints: latency-svc-tgld9 [1.696327513s] May 15 00:20:44.409: INFO: Created: latency-svc-5vmk7 May 15 00:20:44.422: INFO: Got endpoints: latency-svc-5vmk7 [1.592247293s] May 15 00:20:44.528: INFO: Created: latency-svc-qljcx May 15 00:20:44.532: INFO: Got endpoints: latency-svc-qljcx [1.57674932s] May 15 00:20:44.592: INFO: Created: latency-svc-ccdzs May 15 00:20:44.617: INFO: Got endpoints: latency-svc-ccdzs [1.619293599s] May 15 00:20:44.674: INFO: Created: latency-svc-c924v May 15 00:20:44.710: INFO: Got endpoints: latency-svc-c924v [1.568153525s] May 15 00:20:44.761: INFO: Created: latency-svc-zqdl5 May 15 00:20:44.827: INFO: Got endpoints: latency-svc-zqdl5 [1.519675155s] May 15 00:20:44.850: INFO: Created: latency-svc-x4p7z May 15 00:20:44.867: INFO: Got endpoints: latency-svc-x4p7z [1.423755773s] May 15 00:20:44.884: INFO: Created: latency-svc-4n5lr May 15 00:20:44.897: INFO: Got endpoints: 
latency-svc-4n5lr [1.400006284s] May 15 00:20:44.920: INFO: Created: latency-svc-7nwn5 May 15 00:20:44.971: INFO: Got endpoints: latency-svc-7nwn5 [1.387454915s] May 15 00:20:44.988: INFO: Created: latency-svc-mw9gc May 15 00:20:44.999: INFO: Got endpoints: latency-svc-mw9gc [1.359872254s] May 15 00:20:45.049: INFO: Created: latency-svc-5kvfb May 15 00:20:45.132: INFO: Got endpoints: latency-svc-5kvfb [1.355469516s] May 15 00:20:45.166: INFO: Created: latency-svc-kggqg May 15 00:20:45.193: INFO: Got endpoints: latency-svc-kggqg [1.385850526s] May 15 00:20:45.218: INFO: Created: latency-svc-pclxc May 15 00:20:45.276: INFO: Got endpoints: latency-svc-pclxc [1.382883152s] May 15 00:20:45.295: INFO: Created: latency-svc-4l4lh May 15 00:20:45.322: INFO: Got endpoints: latency-svc-4l4lh [1.269077744s] May 15 00:20:45.347: INFO: Created: latency-svc-xqz9z May 15 00:20:45.367: INFO: Got endpoints: latency-svc-xqz9z [1.279755677s] May 15 00:20:45.420: INFO: Created: latency-svc-gvrd7 May 15 00:20:45.439: INFO: Got endpoints: latency-svc-gvrd7 [1.053376907s] May 15 00:20:45.491: INFO: Created: latency-svc-xv4tt May 15 00:20:45.506: INFO: Got endpoints: latency-svc-xv4tt [1.083527746s] May 15 00:20:45.552: INFO: Created: latency-svc-4llpl May 15 00:20:45.574: INFO: Created: latency-svc-bndpm May 15 00:20:45.574: INFO: Got endpoints: latency-svc-4llpl [1.041935011s] May 15 00:20:45.590: INFO: Got endpoints: latency-svc-bndpm [972.865294ms] May 15 00:20:45.625: INFO: Created: latency-svc-9hhqf May 15 00:20:45.638: INFO: Got endpoints: latency-svc-9hhqf [928.207733ms] May 15 00:20:45.719: INFO: Created: latency-svc-fp4cg May 15 00:20:45.723: INFO: Got endpoints: latency-svc-fp4cg [895.6742ms] May 15 00:20:45.778: INFO: Created: latency-svc-mgztb May 15 00:20:45.807: INFO: Got endpoints: latency-svc-mgztb [939.529068ms] May 15 00:20:45.875: INFO: Created: latency-svc-zrqq8 May 15 00:20:45.900: INFO: Got endpoints: latency-svc-zrqq8 [1.003248403s] May 15 00:20:45.946: INFO: Created: latency-svc-x4g5r May 15 00:20:46.037: INFO: Got endpoints: latency-svc-x4g5r [1.066268538s] May 15 00:20:46.041: INFO: Created: latency-svc-2nc2h May 15 00:20:46.075: INFO: Got endpoints: latency-svc-2nc2h [1.075887616s] May 15 00:20:46.099: INFO: Created: latency-svc-hxhbt May 15 00:20:46.110: INFO: Got endpoints: latency-svc-hxhbt [977.532075ms] May 15 00:20:46.132: INFO: Created: latency-svc-tm5vz May 15 00:20:46.240: INFO: Got endpoints: latency-svc-tm5vz [1.047414217s] May 15 00:20:46.243: INFO: Created: latency-svc-9zz7s May 15 00:20:46.254: INFO: Got endpoints: latency-svc-9zz7s [978.046082ms] May 15 00:20:46.272: INFO: Created: latency-svc-x2t9b May 15 00:20:46.285: INFO: Got endpoints: latency-svc-x2t9b [962.457427ms] May 15 00:20:46.302: INFO: Created: latency-svc-xddbl May 15 00:20:46.315: INFO: Got endpoints: latency-svc-xddbl [948.447901ms] May 15 00:20:46.332: INFO: Created: latency-svc-h95g9 May 15 00:20:46.378: INFO: Got endpoints: latency-svc-h95g9 [939.406588ms] May 15 00:20:46.398: INFO: Created: latency-svc-mndll May 15 00:20:46.427: INFO: Got endpoints: latency-svc-mndll [921.358606ms] May 15 00:20:46.479: INFO: Created: latency-svc-89fxg May 15 00:20:46.558: INFO: Got endpoints: latency-svc-89fxg [983.926044ms] May 15 00:20:46.564: INFO: Created: latency-svc-bsg8j May 15 00:20:46.574: INFO: Got endpoints: latency-svc-bsg8j [984.570553ms] May 15 00:20:46.621: INFO: Created: latency-svc-hs4s5 May 15 00:20:46.657: INFO: Got endpoints: latency-svc-hs4s5 [1.018064336s] May 15 00:20:46.726: INFO: Created: 
latency-svc-9b6ss May 15 00:20:46.730: INFO: Got endpoints: latency-svc-9b6ss [1.007444269s] May 15 00:20:46.762: INFO: Created: latency-svc-hcz26 May 15 00:20:46.766: INFO: Got endpoints: latency-svc-hcz26 [959.241731ms] May 15 00:20:46.794: INFO: Created: latency-svc-6vbr4 May 15 00:20:46.833: INFO: Got endpoints: latency-svc-6vbr4 [933.174951ms] May 15 00:20:46.848: INFO: Created: latency-svc-p9hpf May 15 00:20:46.858: INFO: Got endpoints: latency-svc-p9hpf [820.307683ms] May 15 00:20:46.876: INFO: Created: latency-svc-t7lg8 May 15 00:20:46.888: INFO: Got endpoints: latency-svc-t7lg8 [812.343058ms] May 15 00:20:46.906: INFO: Created: latency-svc-m8lhn May 15 00:20:46.919: INFO: Got endpoints: latency-svc-m8lhn [808.739342ms] May 15 00:20:46.977: INFO: Created: latency-svc-tdplg May 15 00:20:46.981: INFO: Got endpoints: latency-svc-tdplg [740.23107ms] May 15 00:20:47.011: INFO: Created: latency-svc-4hn7f May 15 00:20:47.021: INFO: Got endpoints: latency-svc-4hn7f [766.192183ms] May 15 00:20:47.059: INFO: Created: latency-svc-4hhds May 15 00:20:47.069: INFO: Got endpoints: latency-svc-4hhds [784.029062ms] May 15 00:20:47.115: INFO: Created: latency-svc-hkpfq May 15 00:20:47.134: INFO: Got endpoints: latency-svc-hkpfq [819.256194ms] May 15 00:20:47.166: INFO: Created: latency-svc-f2qsx May 15 00:20:47.177: INFO: Got endpoints: latency-svc-f2qsx [798.739241ms] May 15 00:20:47.194: INFO: Created: latency-svc-f6hmp May 15 00:20:47.212: INFO: Got endpoints: latency-svc-f6hmp [784.701122ms] May 15 00:20:47.252: INFO: Created: latency-svc-gxdmh May 15 00:20:47.287: INFO: Got endpoints: latency-svc-gxdmh [728.463994ms] May 15 00:20:47.287: INFO: Created: latency-svc-2xfm5 May 15 00:20:47.308: INFO: Got endpoints: latency-svc-2xfm5 [733.531783ms] May 15 00:20:47.333: INFO: Created: latency-svc-wl2vz May 15 00:20:47.347: INFO: Got endpoints: latency-svc-wl2vz [690.036061ms] May 15 00:20:47.403: INFO: Created: latency-svc-7gdhn May 15 00:20:47.413: INFO: Got endpoints: latency-svc-7gdhn [682.72107ms] May 15 00:20:47.444: INFO: Created: latency-svc-rbrxn May 15 00:20:47.449: INFO: Got endpoints: latency-svc-rbrxn [682.711014ms] May 15 00:20:47.479: INFO: Created: latency-svc-bbl5n May 15 00:20:47.491: INFO: Got endpoints: latency-svc-bbl5n [657.769926ms] May 15 00:20:47.540: INFO: Created: latency-svc-xqcln May 15 00:20:47.544: INFO: Got endpoints: latency-svc-xqcln [685.876845ms] May 15 00:20:47.573: INFO: Created: latency-svc-8gtls May 15 00:20:47.598: INFO: Got endpoints: latency-svc-8gtls [710.515899ms] May 15 00:20:47.616: INFO: Created: latency-svc-9xx69 May 15 00:20:47.631: INFO: Got endpoints: latency-svc-9xx69 [711.79868ms] May 15 00:20:47.689: INFO: Created: latency-svc-4sgxb May 15 00:20:47.692: INFO: Got endpoints: latency-svc-4sgxb [711.762396ms] May 15 00:20:47.728: INFO: Created: latency-svc-92ns5 May 15 00:20:47.739: INFO: Got endpoints: latency-svc-92ns5 [718.391921ms] May 15 00:20:47.845: INFO: Created: latency-svc-gvd5x May 15 00:20:47.862: INFO: Got endpoints: latency-svc-gvd5x [793.361056ms] May 15 00:20:47.889: INFO: Created: latency-svc-8d6bn May 15 00:20:47.896: INFO: Got endpoints: latency-svc-8d6bn [761.107472ms] May 15 00:20:47.914: INFO: Created: latency-svc-pbdws May 15 00:20:47.925: INFO: Got endpoints: latency-svc-pbdws [748.163571ms] May 15 00:20:47.944: INFO: Created: latency-svc-khlcq May 15 00:20:47.983: INFO: Got endpoints: latency-svc-khlcq [771.32853ms] May 15 00:20:48.025: INFO: Created: latency-svc-s2vzp May 15 00:20:48.035: INFO: Got endpoints: 
latency-svc-s2vzp [748.169593ms] May 15 00:20:48.055: INFO: Created: latency-svc-4gsmf May 15 00:20:48.065: INFO: Got endpoints: latency-svc-4gsmf [756.655405ms] May 15 00:20:48.143: INFO: Created: latency-svc-7vngf May 15 00:20:48.153: INFO: Got endpoints: latency-svc-7vngf [806.124442ms] May 15 00:20:48.175: INFO: Created: latency-svc-n6znl May 15 00:20:48.189: INFO: Got endpoints: latency-svc-n6znl [776.322624ms] May 15 00:20:48.211: INFO: Created: latency-svc-7z62s May 15 00:20:48.258: INFO: Got endpoints: latency-svc-7z62s [809.261727ms] May 15 00:20:48.272: INFO: Created: latency-svc-9c5d4 May 15 00:20:48.292: INFO: Got endpoints: latency-svc-9c5d4 [800.464763ms] May 15 00:20:48.323: INFO: Created: latency-svc-bw469 May 15 00:20:48.335: INFO: Got endpoints: latency-svc-bw469 [790.960923ms] May 15 00:20:48.396: INFO: Created: latency-svc-dvzfk May 15 00:20:48.400: INFO: Got endpoints: latency-svc-dvzfk [801.580503ms] May 15 00:20:48.427: INFO: Created: latency-svc-z8ddr May 15 00:20:48.456: INFO: Got endpoints: latency-svc-z8ddr [824.903854ms] May 15 00:20:48.479: INFO: Created: latency-svc-pkk96 May 15 00:20:48.539: INFO: Got endpoints: latency-svc-pkk96 [846.832065ms] May 15 00:20:48.580: INFO: Created: latency-svc-k6clx May 15 00:20:48.600: INFO: Got endpoints: latency-svc-k6clx [861.441459ms] May 15 00:20:48.631: INFO: Created: latency-svc-7cgq5 May 15 00:20:48.677: INFO: Got endpoints: latency-svc-7cgq5 [814.664572ms] May 15 00:20:48.691: INFO: Created: latency-svc-qr2hn May 15 00:20:48.727: INFO: Got endpoints: latency-svc-qr2hn [831.127604ms] May 15 00:20:48.820: INFO: Created: latency-svc-wb52z May 15 00:20:48.834: INFO: Got endpoints: latency-svc-wb52z [908.965575ms] May 15 00:20:48.854: INFO: Created: latency-svc-8pv8d May 15 00:20:48.873: INFO: Got endpoints: latency-svc-8pv8d [889.798115ms] May 15 00:20:48.895: INFO: Created: latency-svc-rv9g8 May 15 00:20:48.935: INFO: Got endpoints: latency-svc-rv9g8 [900.197075ms] May 15 00:20:48.952: INFO: Created: latency-svc-mtjqx May 15 00:20:48.967: INFO: Got endpoints: latency-svc-mtjqx [902.140425ms] May 15 00:20:48.988: INFO: Created: latency-svc-wswqq May 15 00:20:49.003: INFO: Got endpoints: latency-svc-wswqq [850.387038ms] May 15 00:20:49.024: INFO: Created: latency-svc-vh54k May 15 00:20:49.085: INFO: Got endpoints: latency-svc-vh54k [895.638541ms] May 15 00:20:49.111: INFO: Created: latency-svc-b77hk May 15 00:20:49.130: INFO: Got endpoints: latency-svc-b77hk [872.026726ms] May 15 00:20:49.154: INFO: Created: latency-svc-5j69f May 15 00:20:49.168: INFO: Got endpoints: latency-svc-5j69f [875.809778ms] May 15 00:20:49.222: INFO: Created: latency-svc-fx4zd May 15 00:20:49.226: INFO: Got endpoints: latency-svc-fx4zd [891.058514ms] May 15 00:20:49.246: INFO: Created: latency-svc-npw8c May 15 00:20:49.270: INFO: Got endpoints: latency-svc-npw8c [869.955968ms] May 15 00:20:49.302: INFO: Created: latency-svc-5xkm6 May 15 00:20:49.317: INFO: Got endpoints: latency-svc-5xkm6 [861.568956ms] May 15 00:20:49.372: INFO: Created: latency-svc-lx8v8 May 15 00:20:49.397: INFO: Got endpoints: latency-svc-lx8v8 [857.283821ms] May 15 00:20:49.397: INFO: Created: latency-svc-57k4q May 15 00:20:49.426: INFO: Got endpoints: latency-svc-57k4q [825.5922ms] May 15 00:20:49.450: INFO: Created: latency-svc-2xq2c May 15 00:20:49.462: INFO: Got endpoints: latency-svc-2xq2c [784.859663ms] May 15 00:20:49.504: INFO: Created: latency-svc-d988z May 15 00:20:49.507: INFO: Got endpoints: latency-svc-d988z [780.287542ms] May 15 00:20:49.536: INFO: Created: 
latency-svc-bn6hx May 15 00:20:49.567: INFO: Got endpoints: latency-svc-bn6hx [732.037302ms] May 15 00:20:49.635: INFO: Created: latency-svc-qt95s May 15 00:20:49.643: INFO: Got endpoints: latency-svc-qt95s [769.774487ms] May 15 00:20:49.672: INFO: Created: latency-svc-kmtpj May 15 00:20:49.691: INFO: Got endpoints: latency-svc-kmtpj [756.101105ms] May 15 00:20:49.710: INFO: Created: latency-svc-r6jrr May 15 00:20:49.785: INFO: Got endpoints: latency-svc-r6jrr [817.980594ms] May 15 00:20:49.816: INFO: Created: latency-svc-qm9qp May 15 00:20:49.852: INFO: Got endpoints: latency-svc-qm9qp [848.360276ms] May 15 00:20:49.929: INFO: Created: latency-svc-6hkvv May 15 00:20:49.956: INFO: Got endpoints: latency-svc-6hkvv [870.90558ms] May 15 00:20:49.957: INFO: Created: latency-svc-qg8vc May 15 00:20:49.999: INFO: Got endpoints: latency-svc-qg8vc [868.529571ms] May 15 00:20:50.098: INFO: Created: latency-svc-58kf9 May 15 00:20:50.112: INFO: Got endpoints: latency-svc-58kf9 [944.342266ms] May 15 00:20:50.135: INFO: Created: latency-svc-ks7ff May 15 00:20:50.160: INFO: Got endpoints: latency-svc-ks7ff [934.628038ms] May 15 00:20:50.229: INFO: Created: latency-svc-cmpm7 May 15 00:20:50.239: INFO: Got endpoints: latency-svc-cmpm7 [969.071712ms] May 15 00:20:50.266: INFO: Created: latency-svc-bzjxc May 15 00:20:50.275: INFO: Got endpoints: latency-svc-bzjxc [957.761417ms] May 15 00:20:50.296: INFO: Created: latency-svc-9zhpc May 15 00:20:50.299: INFO: Got endpoints: latency-svc-9zhpc [902.266464ms] May 15 00:20:50.378: INFO: Created: latency-svc-xgktj May 15 00:20:50.406: INFO: Got endpoints: latency-svc-xgktj [980.330008ms] May 15 00:20:50.409: INFO: Created: latency-svc-6q2d7 May 15 00:20:50.430: INFO: Got endpoints: latency-svc-6q2d7 [968.335616ms] May 15 00:20:50.554: INFO: Created: latency-svc-xhcqs May 15 00:20:50.577: INFO: Got endpoints: latency-svc-xhcqs [1.069759305s] May 15 00:20:50.604: INFO: Created: latency-svc-dgzbc May 15 00:20:50.659: INFO: Got endpoints: latency-svc-dgzbc [1.092124124s] May 15 00:20:50.698: INFO: Created: latency-svc-xv7wx May 15 00:20:50.715: INFO: Got endpoints: latency-svc-xv7wx [1.071907477s] May 15 00:20:50.786: INFO: Created: latency-svc-c7d8t May 15 00:20:50.793: INFO: Got endpoints: latency-svc-c7d8t [1.101944836s] May 15 00:20:50.851: INFO: Created: latency-svc-6vhd5 May 15 00:20:50.865: INFO: Got endpoints: latency-svc-6vhd5 [1.079813976s] May 15 00:20:50.950: INFO: Created: latency-svc-sdfsp May 15 00:20:50.980: INFO: Got endpoints: latency-svc-sdfsp [1.12795658s] May 15 00:20:51.016: INFO: Created: latency-svc-dpckt May 15 00:20:51.067: INFO: Got endpoints: latency-svc-dpckt [1.11044396s] May 15 00:20:51.070: INFO: Created: latency-svc-k8qpq May 15 00:20:51.096: INFO: Got endpoints: latency-svc-k8qpq [1.097349665s] May 15 00:20:51.130: INFO: Created: latency-svc-9s7lv May 15 00:20:51.142: INFO: Got endpoints: latency-svc-9s7lv [1.030023616s] May 15 00:20:51.160: INFO: Created: latency-svc-g29bb May 15 00:20:51.204: INFO: Got endpoints: latency-svc-g29bb [1.043865462s] May 15 00:20:51.220: INFO: Created: latency-svc-p97hc May 15 00:20:51.247: INFO: Got endpoints: latency-svc-p97hc [1.008183036s] May 15 00:20:51.276: INFO: Created: latency-svc-9r4wz May 15 00:20:51.287: INFO: Got endpoints: latency-svc-9r4wz [1.01238443s] May 15 00:20:51.360: INFO: Created: latency-svc-cmts7 May 15 00:20:51.388: INFO: Got endpoints: latency-svc-cmts7 [1.089112569s] May 15 00:20:51.391: INFO: Created: latency-svc-866bb May 15 00:20:51.402: INFO: Got endpoints: 
latency-svc-866bb [995.444985ms] May 15 00:20:51.450: INFO: Created: latency-svc-x4xtw May 15 00:20:51.513: INFO: Got endpoints: latency-svc-x4xtw [1.082043701s] May 15 00:20:51.535: INFO: Created: latency-svc-j8shv May 15 00:20:51.559: INFO: Got endpoints: latency-svc-j8shv [981.99631ms] May 15 00:20:51.600: INFO: Created: latency-svc-5gqzm May 15 00:20:51.630: INFO: Got endpoints: latency-svc-5gqzm [970.704561ms] May 15 00:20:51.701: INFO: Created: latency-svc-dnscj May 15 00:20:51.726: INFO: Got endpoints: latency-svc-dnscj [1.011433496s] May 15 00:20:51.797: INFO: Created: latency-svc-6vq28 May 15 00:20:51.805: INFO: Got endpoints: latency-svc-6vq28 [1.01214003s] May 15 00:20:51.867: INFO: Created: latency-svc-w8dft May 15 00:20:51.895: INFO: Got endpoints: latency-svc-w8dft [1.02975683s] May 15 00:20:51.977: INFO: Created: latency-svc-wvntx May 15 00:20:51.991: INFO: Got endpoints: latency-svc-wvntx [1.011340977s] May 15 00:20:52.139: INFO: Created: latency-svc-p9srf May 15 00:20:52.159: INFO: Got endpoints: latency-svc-p9srf [1.092836186s] May 15 00:20:52.188: INFO: Created: latency-svc-4wfkh May 15 00:20:52.222: INFO: Got endpoints: latency-svc-4wfkh [1.125116853s] May 15 00:20:52.305: INFO: Created: latency-svc-98m8k May 15 00:20:52.335: INFO: Got endpoints: latency-svc-98m8k [1.192558614s] May 15 00:20:52.360: INFO: Created: latency-svc-bhhgk May 15 00:20:52.377: INFO: Got endpoints: latency-svc-bhhgk [1.17286579s] May 15 00:20:52.433: INFO: Created: latency-svc-vkh46 May 15 00:20:52.436: INFO: Got endpoints: latency-svc-vkh46 [1.188386719s] May 15 00:20:52.482: INFO: Created: latency-svc-62hws May 15 00:20:52.497: INFO: Got endpoints: latency-svc-62hws [1.209591882s] May 15 00:20:52.527: INFO: Created: latency-svc-v64xn May 15 00:20:52.570: INFO: Got endpoints: latency-svc-v64xn [1.181506211s] May 15 00:20:52.626: INFO: Created: latency-svc-vcgh4 May 15 00:20:52.635: INFO: Got endpoints: latency-svc-vcgh4 [1.233320625s] May 15 00:20:52.664: INFO: Created: latency-svc-5zm9b May 15 00:20:52.731: INFO: Got endpoints: latency-svc-5zm9b [1.218324869s] May 15 00:20:52.734: INFO: Created: latency-svc-5kqrl May 15 00:20:52.762: INFO: Got endpoints: latency-svc-5kqrl [1.20349102s] May 15 00:20:52.798: INFO: Created: latency-svc-xvrb7 May 15 00:20:52.824: INFO: Got endpoints: latency-svc-xvrb7 [1.194654291s] May 15 00:20:52.887: INFO: Created: latency-svc-kvc27 May 15 00:20:52.905: INFO: Got endpoints: latency-svc-kvc27 [1.17885407s] May 15 00:20:52.932: INFO: Created: latency-svc-mtdwr May 15 00:20:52.953: INFO: Got endpoints: latency-svc-mtdwr [1.147935536s] May 15 00:20:53.031: INFO: Created: latency-svc-2q5cx May 15 00:20:53.034: INFO: Got endpoints: latency-svc-2q5cx [1.139425859s] May 15 00:20:53.034: INFO: Latencies: [136.065621ms 158.751345ms 194.033913ms 231.034461ms 271.88981ms 344.283154ms 405.970732ms 486.682426ms 570.549375ms 648.038435ms 657.769926ms 682.711014ms 682.72107ms 685.876845ms 690.036061ms 710.515899ms 711.762396ms 711.79868ms 718.391921ms 720.916196ms 728.463994ms 732.037302ms 733.531783ms 740.23107ms 748.163571ms 748.169593ms 752.523978ms 756.101105ms 756.655405ms 761.107472ms 763.563704ms 766.192183ms 769.774487ms 771.32853ms 776.322624ms 780.287542ms 784.029062ms 784.701122ms 784.859663ms 787.064087ms 790.960923ms 793.361056ms 798.739241ms 800.464763ms 801.580503ms 802.079676ms 803.458535ms 806.124442ms 808.739342ms 809.261727ms 812.343058ms 814.664572ms 816.124197ms 817.980594ms 819.256194ms 820.307683ms 822.154057ms 824.903854ms 825.5922ms 826.110636ms 
828.144933ms 831.127604ms 832.751321ms 834.157473ms 834.986808ms 835.990037ms 846.832065ms 846.931293ms 848.360276ms 850.387038ms 851.559441ms 857.283821ms 861.441459ms 861.568956ms 862.213451ms 868.529571ms 869.955968ms 870.90558ms 871.584693ms 872.026726ms 875.090226ms 875.809778ms 876.609897ms 884.194452ms 889.798115ms 891.058514ms 892.239202ms 895.638541ms 895.6742ms 900.197075ms 902.140425ms 902.266464ms 908.965575ms 921.358606ms 921.619464ms 928.207733ms 933.174951ms 934.628038ms 939.406588ms 939.529068ms 942.913641ms 944.342266ms 945.851381ms 948.447901ms 948.688266ms 957.761417ms 959.241731ms 962.457427ms 968.335616ms 969.071712ms 970.704561ms 971.824696ms 972.865294ms 977.532075ms 978.046082ms 980.330008ms 981.99631ms 983.926044ms 984.570553ms 995.444985ms 1.003248403s 1.007444269s 1.008183036s 1.011340977s 1.011433496s 1.01214003s 1.01238443s 1.018064336s 1.02975683s 1.030023616s 1.041935011s 1.043865462s 1.047414217s 1.053376907s 1.066268538s 1.069759305s 1.071907477s 1.075887616s 1.079813976s 1.082043701s 1.083527746s 1.089112569s 1.092124124s 1.092836186s 1.097349665s 1.101944836s 1.11044396s 1.125116853s 1.12795658s 1.139425859s 1.147935536s 1.158093659s 1.17286579s 1.175770221s 1.17885407s 1.181506211s 1.188386719s 1.192558614s 1.194654291s 1.20349102s 1.209591882s 1.216324826s 1.218324869s 1.233320625s 1.245827785s 1.269077744s 1.27427042s 1.279755677s 1.319403069s 1.355469516s 1.359872254s 1.382883152s 1.385850526s 1.387454915s 1.400006284s 1.423755773s 1.455589136s 1.510609915s 1.519675155s 1.568153525s 1.572909832s 1.57674932s 1.587742679s 1.592247293s 1.59586389s 1.619293599s 1.638297029s 1.696310755s 1.696327513s 1.698227261s 1.721906724s 1.742773956s 1.747912207s 1.77540556s 1.784546834s 1.823347013s 1.838584254s 1.886597939s 1.918140063s 1.997879042s] May 15 00:20:53.034: INFO: 50 %ile: 942.913641ms May 15 00:20:53.034: INFO: 90 %ile: 1.572909832s May 15 00:20:53.034: INFO: 99 %ile: 1.918140063s May 15 00:20:53.034: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:20:53.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1079" for this suite. 
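Each "Created:"/"Got endpoints:" pair above times how long a freshly created Service takes to acquire ready endpoint addresses; the 200 samples are then sorted into the percentile summary at the end. Below is a rough sketch of taking one such sample in Go, assuming pods labeled app=svc-latency-demo already exist behind the selector and that /root/.kube/config works. The real suite drives this from an endpoints watcher, so the plain polling shown here only approximates the measurement.

// Sketch only: time from Service creation until its Endpoints object has an address.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default"

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "svc-latency-demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	start := time.Now()
	created, err := clientset.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// "Got endpoints" in the log corresponds to the first moment the service's
	// Endpoints object carries at least one address.
	err = wait.PollImmediate(50*time.Millisecond, 30*time.Second, func() (bool, error) {
		ep, err := clientset.CoreV1().Endpoints(ns).Get(context.TODO(), created.Name, metav1.GetOptions{})
		if err != nil {
			return false, nil // the Endpoints object may not exist yet
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got endpoints: %s [%v]\n", created.Name, time.Since(start))
}

The printed duration corresponds to one bracketed latency value in the sample list above; repeating the loop and sorting the results yields the 50/90/99 %ile figures the spec reports.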
• [SLOW TEST:17.375 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":127,"skipped":2011,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:20:53.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-zvmw STEP: Creating a pod to test atomic-volume-subpath May 15 00:20:53.196: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zvmw" in namespace "subpath-410" to be "Succeeded or Failed" May 15 00:20:53.200: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489411ms May 15 00:20:55.224: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027923487s May 15 00:20:57.227: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030771089s May 15 00:20:59.432: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 6.236199313s May 15 00:21:01.453: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 8.257115337s May 15 00:21:03.474: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 10.277690118s May 15 00:21:05.477: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 12.281130669s May 15 00:21:07.483: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 14.286958181s May 15 00:21:09.491: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 16.295306312s May 15 00:21:11.533: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 18.337493953s May 15 00:21:13.618: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 20.422330486s May 15 00:21:15.630: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Running", Reason="", readiness=true. Elapsed: 22.433718562s May 15 00:21:17.634: INFO: Pod "pod-subpath-test-downwardapi-zvmw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.437644198s STEP: Saw pod success May 15 00:21:17.634: INFO: Pod "pod-subpath-test-downwardapi-zvmw" satisfied condition "Succeeded or Failed" May 15 00:21:17.636: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-zvmw container test-container-subpath-downwardapi-zvmw: STEP: delete the pod May 15 00:21:18.143: INFO: Waiting for pod pod-subpath-test-downwardapi-zvmw to disappear May 15 00:21:18.155: INFO: Pod pod-subpath-test-downwardapi-zvmw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zvmw May 15 00:21:18.155: INFO: Deleting pod "pod-subpath-test-downwardapi-zvmw" in namespace "subpath-410" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:21:18.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-410" for this suite. • [SLOW TEST:25.164 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":128,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:21:18.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:22:18.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6599" for this suite. 
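The spec above relies on the asymmetry between probe types: a failing readiness probe only keeps the pod out of the Ready condition, while it is a failing liveness probe that triggers container restarts. Below is a sketch of the pod shape involved, in Go against a v0.18-era client-go (where the probe's handler field is still named Handler; later releases rename it to ProbeHandler); the image and names are illustrative, not the suite's fixtures.

// Sketch only: a pod whose readiness probe always fails, so it runs
// indefinitely without ever becoming Ready and without restarting.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// The probe always exits non-zero: the container keeps running, the
				// pod's Ready condition stays False, and the restart count stays 0,
				// because readiness failures (unlike liveness failures) never kill it.
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					PeriodSeconds: 5,
				},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created never-ready-demo; expect Ready=False and RestartCount=0")
}

After a minute of failed probes such a pod should still show READY 0/1 with RESTARTS 0, which is exactly what "should never be ready and never restart" asserts over the spec's 60-second observation window.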
• [SLOW TEST:60.102 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2045,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:22:18.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-540 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-540;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-540 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-540;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-540.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-540.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-540.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-540.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-540.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-540.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-540.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-540.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.97.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.97.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.97.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.97.224_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-540 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-540;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-540 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-540;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-540.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-540.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-540.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-540.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-540.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-540.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-540.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-540.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-540.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-540.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.97.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.97.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.97.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.97.224_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 00:22:26.624: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.627: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.633: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.664: INFO: Unable to read jessie_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.667: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.669: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.672: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.674: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.676: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.681: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:26.695: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:31.701: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.706: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.712: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.721: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.724: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.747: INFO: Unable to read jessie_udp@dns-test-service from pod 
dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.751: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.754: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.760: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.763: INFO: Unable to read jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.767: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.770: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:31.791: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:36.700: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.705: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.711: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find 
the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.716: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.719: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.722: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.743: INFO: Unable to read jessie_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.746: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.749: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.751: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.754: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.760: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.763: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:36.782: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:41.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.702: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.720: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.722: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.768: INFO: Unable to read jessie_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.771: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.773: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.776: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.779: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 
00:22:41.782: INFO: Unable to read jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.785: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.788: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:41.827: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:46.701: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.705: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.710: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.712: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.721: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.725: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.748: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.751: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.755: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.761: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.767: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.771: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:46.790: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:51.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.703: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540 from pod 
dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.711: INFO: Unable to read wheezy_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.715: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.716: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.729: INFO: Unable to read jessie_udp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.731: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.733: INFO: Unable to read jessie_udp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.735: INFO: Unable to read jessie_tcp@dns-test-service.dns-540 from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.737: INFO: Unable to read jessie_udp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.743: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-540.svc from pod dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa: the server could not find the requested resource (get pods dns-test-06790f34-d534-41b1-8f7c-399d23b981fa) May 15 00:22:51.755: INFO: Lookups using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-540 wheezy_tcp@dns-test-service.dns-540 wheezy_udp@dns-test-service.dns-540.svc wheezy_tcp@dns-test-service.dns-540.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-540.svc wheezy_tcp@_http._tcp.dns-test-service.dns-540.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-540 jessie_tcp@dns-test-service.dns-540 jessie_udp@dns-test-service.dns-540.svc jessie_tcp@dns-test-service.dns-540.svc jessie_udp@_http._tcp.dns-test-service.dns-540.svc jessie_tcp@_http._tcp.dns-test-service.dns-540.svc] May 15 00:22:56.787: INFO: DNS probes using dns-540/dns-test-06790f34-d534-41b1-8f7c-399d23b981fa succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:22:57.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-540" for this suite. • [SLOW TEST:39.043 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":130,"skipped":2066,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:22:57.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:22:57.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8" in namespace "projected-2980" to be "Succeeded or Failed" May 15 00:22:57.593: INFO: Pod "downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.258219ms May 15 00:22:59.732: INFO: Pod "downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163932769s May 15 00:23:01.775: INFO: Pod "downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.206529696s STEP: Saw pod success May 15 00:23:01.775: INFO: Pod "downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8" satisfied condition "Succeeded or Failed" May 15 00:23:01.779: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8 container client-container: STEP: delete the pod May 15 00:23:01.951: INFO: Waiting for pod downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8 to disappear May 15 00:23:01.962: INFO: Pod downwardapi-volume-987f7b9b-97ae-4daf-bbd2-30dc431683e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:01.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2980" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:01.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:23:02.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2" in namespace "projected-7230" to be "Succeeded or Failed" May 15 00:23:02.217: INFO: Pod "downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.659097ms May 15 00:23:04.278: INFO: Pod "downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10048817s May 15 00:23:06.456: INFO: Pod "downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.278790261s STEP: Saw pod success May 15 00:23:06.456: INFO: Pod "downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2" satisfied condition "Succeeded or Failed" May 15 00:23:06.460: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2 container client-container: STEP: delete the pod May 15 00:23:06.615: INFO: Waiting for pod downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2 to disappear May 15 00:23:06.627: INFO: Pod downwardapi-volume-c3704cc5-decf-40ee-bb5f-b4c748d86be2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:06.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7230" for this suite. 
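Both projected-downwardAPI specs above mount pod metadata and resource fields as files read by a container named client-container. A minimal sketch of such a pod follows; the pod name, image, and mount path are illustrative, not the framework's actual fixture:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # Print the projected files, mirroring what the test reads back.
        command: ["sh", "-c", "cat /etc/podinfo/podname /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF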
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2098,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:06.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:23:06.704: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:10.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1853" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2108,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:10.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4076" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":134,"skipped":2130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:11.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:15.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9561" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:15.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-167bc6e3-2758-4ee0-929f-df3aff81a0e4 STEP: Creating a pod to test consume secrets May 15 00:23:15.194: INFO: Waiting up to 5m0s for pod "pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e" in namespace "secrets-8885" to be "Succeeded or Failed" May 15 00:23:15.208: INFO: Pod "pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.452157ms May 15 00:23:17.253: INFO: Pod "pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059751223s May 15 00:23:19.268: INFO: Pod "pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074371172s STEP: Saw pod success May 15 00:23:19.268: INFO: Pod "pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e" satisfied condition "Succeeded or Failed" May 15 00:23:19.270: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e container secret-volume-test: STEP: delete the pod May 15 00:23:19.331: INFO: Waiting for pod pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e to disappear May 15 00:23:19.403: INFO: Pod pod-secrets-ac9fe314-6463-4999-b90e-1a56e949966e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:19.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8885" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:19.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 00:23:23.577: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:23.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2428" for this suite. 
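The container-runtime test above leans on terminationMessagePolicy: FallbackToLogsOnError, under which the kubelet copies the tail of the container log into the termination message when the container exits non-zero and writes nothing to its terminationMessagePath. A minimal sketch, with an illustrative pod name and image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        # Log "DONE" and exit non-zero; nothing is written to
        # /dev/termination-log, so the log tail becomes the message.
        command: ["sh", "-c", "echo DONE; exit 1"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # Once the container has terminated, read the message back:
    kubectl get pod termination-message-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'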
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2227,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:23.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:23:23.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8" in namespace "downward-api-4383" to be "Succeeded or Failed" May 15 00:23:23.706: INFO: Pod "downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.841207ms May 15 00:23:25.908: INFO: Pod "downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215780982s May 15 00:23:27.911: INFO: Pod "downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.218844564s STEP: Saw pod success May 15 00:23:27.911: INFO: Pod "downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8" satisfied condition "Succeeded or Failed" May 15 00:23:27.914: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8 container client-container: STEP: delete the pod May 15 00:23:27.955: INFO: Waiting for pod downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8 to disappear May 15 00:23:28.050: INFO: Pod downwardapi-volume-7dc1d2b2-32ca-4f4c-a8ed-2c05130d1db8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:28.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4383" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:28.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 15 00:23:28.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4438' May 15 00:23:31.829: INFO: stderr: "" May 15 00:23:31.829: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 00:23:31.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:31.991: INFO: stderr: "" May 15 00:23:31.991: INFO: stdout: "update-demo-nautilus-4wln5 update-demo-nautilus-vppsm " May 15 00:23:31.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wln5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:32.160: INFO: stderr: "" May 15 00:23:32.160: INFO: stdout: "" May 15 00:23:32.160: INFO: update-demo-nautilus-4wln5 is created but not running May 15 00:23:37.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:37.277: INFO: stderr: "" May 15 00:23:37.277: INFO: stdout: "update-demo-nautilus-4wln5 update-demo-nautilus-vppsm " May 15 00:23:37.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wln5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:37.373: INFO: stderr: "" May 15 00:23:37.373: INFO: stdout: "true" May 15 00:23:37.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4wln5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:37.469: INFO: stderr: "" May 15 00:23:37.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:37.469: INFO: validating pod update-demo-nautilus-4wln5 May 15 00:23:37.481: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:37.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:37.481: INFO: update-demo-nautilus-4wln5 is verified up and running May 15 00:23:37.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:37.577: INFO: stderr: "" May 15 00:23:37.577: INFO: stdout: "true" May 15 00:23:37.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:37.665: INFO: stderr: "" May 15 00:23:37.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:37.665: INFO: validating pod update-demo-nautilus-vppsm May 15 00:23:37.668: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:37.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:37.668: INFO: update-demo-nautilus-vppsm is verified up and running STEP: scaling down the replication controller May 15 00:23:37.671: INFO: scanned /root for discovery docs: May 15 00:23:37.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4438' May 15 00:23:38.809: INFO: stderr: "" May 15 00:23:38.809: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 15 00:23:38.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:38.914: INFO: stderr: "" May 15 00:23:38.914: INFO: stdout: "update-demo-nautilus-4wln5 update-demo-nautilus-vppsm " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 00:23:43.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:44.014: INFO: stderr: "" May 15 00:23:44.014: INFO: stdout: "update-demo-nautilus-4wln5 update-demo-nautilus-vppsm " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 00:23:49.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:49.118: INFO: stderr: "" May 15 00:23:49.118: INFO: stdout: "update-demo-nautilus-vppsm " May 15 00:23:49.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:49.221: INFO: stderr: "" May 15 00:23:49.221: INFO: stdout: "true" May 15 00:23:49.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:49.320: INFO: stderr: "" May 15 00:23:49.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:49.320: INFO: validating pod update-demo-nautilus-vppsm May 15 00:23:49.324: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:49.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:49.324: INFO: update-demo-nautilus-vppsm is verified up and running STEP: scaling up the replication controller May 15 00:23:49.326: INFO: scanned /root for discovery docs: May 15 00:23:49.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4438' May 15 00:23:50.459: INFO: stderr: "" May 15 00:23:50.459: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 00:23:50.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:50.700: INFO: stderr: "" May 15 00:23:50.700: INFO: stdout: "update-demo-nautilus-vppsm update-demo-nautilus-vs9cv " May 15 00:23:50.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:50.854: INFO: stderr: "" May 15 00:23:50.854: INFO: stdout: "true" May 15 00:23:50.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:51.236: INFO: stderr: "" May 15 00:23:51.236: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:51.236: INFO: validating pod update-demo-nautilus-vppsm May 15 00:23:51.240: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:51.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:51.240: INFO: update-demo-nautilus-vppsm is verified up and running May 15 00:23:51.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vs9cv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:51.520: INFO: stderr: "" May 15 00:23:51.520: INFO: stdout: "" May 15 00:23:51.520: INFO: update-demo-nautilus-vs9cv is created but not running May 15 00:23:56.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4438' May 15 00:23:56.622: INFO: stderr: "" May 15 00:23:56.622: INFO: stdout: "update-demo-nautilus-vppsm update-demo-nautilus-vs9cv " May 15 00:23:56.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:56.719: INFO: stderr: "" May 15 00:23:56.719: INFO: stdout: "true" May 15 00:23:56.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vppsm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:56.827: INFO: stderr: "" May 15 00:23:56.827: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:56.827: INFO: validating pod update-demo-nautilus-vppsm May 15 00:23:56.830: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:56.830: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:56.830: INFO: update-demo-nautilus-vppsm is verified up and running May 15 00:23:56.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vs9cv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:56.932: INFO: stderr: "" May 15 00:23:56.932: INFO: stdout: "true" May 15 00:23:56.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vs9cv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4438' May 15 00:23:57.058: INFO: stderr: "" May 15 00:23:57.058: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 00:23:57.058: INFO: validating pod update-demo-nautilus-vs9cv May 15 00:23:57.062: INFO: got data: { "image": "nautilus.jpg" } May 15 00:23:57.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 00:23:57.062: INFO: update-demo-nautilus-vs9cv is verified up and running STEP: using delete to clean up resources May 15 00:23:57.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4438' May 15 00:23:57.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 00:23:57.197: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 15 00:23:57.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4438' May 15 00:23:57.286: INFO: stderr: "No resources found in kubectl-4438 namespace.\n" May 15 00:23:57.287: INFO: stdout: "" May 15 00:23:57.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4438 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 00:23:57.378: INFO: stderr: "" May 15 00:23:57.378: INFO: stdout: "update-demo-nautilus-vppsm\nupdate-demo-nautilus-vs9cv\n" May 15 00:23:57.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4438' May 15 00:23:58.086: INFO: stderr: "No resources found in kubectl-4438 namespace.\n" May 15 00:23:58.086: INFO: stdout: "" May 15 00:23:58.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4438 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 00:23:58.263: INFO: stderr: "" May 15 00:23:58.263: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:23:58.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4438" for this suite. 
• [SLOW TEST:30.235 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":139,"skipped":2280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:23:58.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:23:58.755: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 00:24:02.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2463 create -f -' May 15 00:24:05.482: INFO: stderr: "" May 15 00:24:05.482: INFO: stdout: "e2e-test-crd-publish-openapi-586-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 00:24:05.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2463 delete e2e-test-crd-publish-openapi-586-crds test-cr' May 15 00:24:05.635: INFO: stderr: "" May 15 00:24:05.635: INFO: stdout: "e2e-test-crd-publish-openapi-586-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 15 00:24:05.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2463 apply -f -' May 15 00:24:05.872: INFO: stderr: "" May 15 00:24:05.872: INFO: stdout: "e2e-test-crd-publish-openapi-586-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 00:24:05.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2463 delete e2e-test-crd-publish-openapi-586-crds test-cr' May 15 00:24:05.973: INFO: stderr: "" May 15 00:24:05.973: INFO: stdout: "e2e-test-crd-publish-openapi-586-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 15 00:24:05.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-586-crds' May 15 00:24:06.249: INFO: stderr: "" May 15 00:24:06.249: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-586-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:24:09.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2463" for this suite. • [SLOW TEST:10.898 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":140,"skipped":2313,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:24:09.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-2b97d173-ccc8-482d-b810-717300a3768e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:24:15.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8929" for this suite. 
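As a minimal sketch of what this ConfigMap test exercises (names and payload below are illustrative, not taken from the run): binaryData carries base64-encoded bytes alongside plain-text data entries, and both surface as files when the ConfigMap is mounted as a volume.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-binary-cm              # hypothetical name
data:
  data-1: value-1                   # plain UTF-8 entry
binaryData:
  dump.bin: /u3+                    # arbitrary non-UTF-8 bytes, base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-binary-cm-pod          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: view
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/data-1; hexdump -C /etc/cm/dump.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-binary-cm
EOF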
• [SLOW TEST:6.242 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2334,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:24:15.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 15 00:24:15.548: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:15.551: INFO: Number of nodes with available pods: 0 May 15 00:24:15.551: INFO: Node latest-worker is running more than one daemon pod May 15 00:24:16.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:16.559: INFO: Number of nodes with available pods: 0 May 15 00:24:16.559: INFO: Node latest-worker is running more than one daemon pod May 15 00:24:17.622: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:17.626: INFO: Number of nodes with available pods: 0 May 15 00:24:17.626: INFO: Node latest-worker is running more than one daemon pod May 15 00:24:18.627: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:18.631: INFO: Number of nodes with available pods: 0 May 15 00:24:18.631: INFO: Node latest-worker is running more than one daemon pod May 15 00:24:19.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:19.560: INFO: Number of nodes with available pods: 1 May 15 00:24:19.560: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:20.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:20.564: INFO: Number of nodes with available pods: 2 May 15 00:24:20.564: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 15 00:24:20.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:20.812: INFO: Number of nodes with available pods: 1 May 15 00:24:20.812: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:21.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:21.881: INFO: Number of nodes with available pods: 1 May 15 00:24:21.881: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:22.818: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:22.822: INFO: Number of nodes with available pods: 1 May 15 00:24:22.822: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:23.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:23.820: INFO: Number of nodes with available pods: 1 May 15 00:24:23.820: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:24.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:24.820: INFO: Number of nodes with available pods: 1 May 15 00:24:24.820: INFO: Node latest-worker2 is running more than one daemon pod May 15 00:24:25.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:24:25.822: INFO: Number of nodes with available pods: 2 May 15 00:24:25.822: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
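The revival step above sets a daemon pod's status.phase to Failed through the API, which plain kubectl cannot do directly; the closest manual analogue is deleting one daemon pod and watching the controller recreate it. A rough sketch, with DaemonSet name, label, and image chosen for illustration:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds                     # hypothetical name
spec:
  selector:
    matchLabels:
      app: demo-ds
  template:
    metadata:
      labels:
        app: demo-ds
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
# kill one daemon pod; the DaemonSet controller should bring a replacement up
kubectl delete pod -l app=demo-ds --field-selector spec.nodeName=latest-worker   # node name taken from this run; substitute your own
kubectl get pods -l app=demo-ds -o wide -w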
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1701, will wait for the garbage collector to delete the pods May 15 00:24:25.884: INFO: Deleting DaemonSet.extensions daemon-set took: 4.744496ms May 15 00:24:26.184: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.191744ms May 15 00:24:30.088: INFO: Number of nodes with available pods: 0 May 15 00:24:30.088: INFO: Number of running nodes: 0, number of available pods: 0 May 15 00:24:30.122: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1701/daemonsets","resourceVersion":"4678508"},"items":null} May 15 00:24:30.126: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1701/pods","resourceVersion":"4678508"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:24:30.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1701" for this suite. • [SLOW TEST:14.716 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":142,"skipped":2354,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:24:30.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 15 00:24:34.338: INFO: &Pod{ObjectMeta:{send-events-742d753a-cdc0-4f43-9608-c4b21dbf059e events-8237 /api/v1/namespaces/events-8237/pods/send-events-742d753a-cdc0-4f43-9608-c4b21dbf059e 6f05536f-ec97-4a06-b53a-370fc507b58d 4678538 0 2020-05-15 00:24:30 +0000 UTC map[name:foo time:293020286] map[] [] [] [{e2e.test Update v1 2020-05-15 00:24:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 00:24:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-94jsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-94jsx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-94jsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*
true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:24:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:24:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.194,StartTime:2020-05-15 00:24:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 00:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://9ce70d5b5d3a36c13a9c9cfca39af1df32fd5eeaa31026112fbacda5be6a1dd9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 15 00:24:36.374: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 15 00:24:38.378: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:24:38.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8237" for this suite. 
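The two assertions above ("Saw scheduler event" / "Saw kubelet event") can be checked by hand with event field selectors; a sketch against this run's namespace, where <pod-name> is a hypothetical placeholder:

# events emitted by the scheduler for the pod (e.g. Scheduled)
kubectl get events -n events-8237 \
  --field-selector involvedObject.name=<pod-name>,source=default-scheduler
# events emitted by the kubelet that ran it (e.g. Pulled, Created, Started)
kubectl get events -n events-8237 \
  --field-selector involvedObject.name=<pod-name>,source=kubelet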
• [SLOW TEST:8.290 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":143,"skipped":2363,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:24:38.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7483 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-7483 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7483 May 15 00:24:38.701: INFO: Found 0 stateful pods, waiting for 1 May 15 00:24:48.704: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 15 00:24:48.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:24:48.946: INFO: stderr: "I0515 00:24:48.828558 1449 log.go:172] (0xc00094edc0) (0xc0009c2280) Create stream\nI0515 00:24:48.828613 1449 log.go:172] (0xc00094edc0) (0xc0009c2280) Stream added, broadcasting: 1\nI0515 00:24:48.836710 1449 log.go:172] (0xc00094edc0) Reply frame received for 1\nI0515 00:24:48.836749 1449 log.go:172] (0xc00094edc0) (0xc000834f00) Create stream\nI0515 00:24:48.836759 1449 log.go:172] (0xc00094edc0) (0xc000834f00) Stream added, broadcasting: 3\nI0515 00:24:48.838001 1449 log.go:172] (0xc00094edc0) Reply frame received for 3\nI0515 00:24:48.838164 1449 log.go:172] (0xc00094edc0) (0xc00081a500) Create stream\nI0515 00:24:48.838201 1449 log.go:172] (0xc00094edc0) (0xc00081a500) Stream added, broadcasting: 5\nI0515 00:24:48.840440 1449 log.go:172] (0xc00094edc0) Reply frame received for 5\nI0515 00:24:48.912181 1449 log.go:172] (0xc00094edc0) Data frame received for 5\nI0515 00:24:48.912207 1449 log.go:172] (0xc00081a500) (5) Data frame handling\nI0515 00:24:48.912223 1449 log.go:172] (0xc00081a500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0515 00:24:48.940862 1449 log.go:172] (0xc00094edc0) Data frame received for 5\nI0515 00:24:48.940898 1449 log.go:172] (0xc00094edc0) Data frame received for 3\nI0515 00:24:48.940916 1449 log.go:172] (0xc000834f00) (3) Data frame handling\nI0515 00:24:48.940924 1449 log.go:172] (0xc000834f00) (3) Data frame sent\nI0515 00:24:48.940936 1449 log.go:172] (0xc00094edc0) Data frame received for 3\nI0515 00:24:48.940950 1449 log.go:172] (0xc000834f00) (3) Data frame handling\nI0515 00:24:48.940985 1449 log.go:172] (0xc00081a500) (5) Data frame handling\nI0515 00:24:48.942209 1449 log.go:172] (0xc00094edc0) Data frame received for 1\nI0515 00:24:48.942254 1449 log.go:172] (0xc0009c2280) (1) Data frame handling\nI0515 00:24:48.942271 1449 log.go:172] (0xc0009c2280) (1) Data frame sent\nI0515 00:24:48.942281 1449 log.go:172] (0xc00094edc0) (0xc0009c2280) Stream removed, broadcasting: 1\nI0515 00:24:48.942311 1449 log.go:172] (0xc00094edc0) Go away received\nI0515 00:24:48.942610 1449 log.go:172] (0xc00094edc0) (0xc0009c2280) Stream removed, broadcasting: 1\nI0515 00:24:48.942623 1449 log.go:172] (0xc00094edc0) (0xc000834f00) Stream removed, broadcasting: 3\nI0515 00:24:48.942629 1449 log.go:172] (0xc00094edc0) (0xc00081a500) Stream removed, broadcasting: 5\n" May 15 00:24:48.946: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:24:48.946: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:24:48.949: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 00:24:58.954: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 00:24:58.954: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:24:58.972: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:24:58.972: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:24:58.972: INFO: May 15 00:24:58.972: INFO: StatefulSet ss has not reached scale 3, at 1 May 15 00:24:59.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994232889s May 15 00:25:01.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988781693s May 15 00:25:02.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.693703339s May 15 00:25:03.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.689678518s May 15 00:25:04.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.685985346s May 15 00:25:05.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.658408441s May 15 00:25:06.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.653668604s May 15 00:25:07.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.648813683s May 15 00:25:08.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 643.268666ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7483 May 15 00:25:09.337: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:25:09.518: INFO: stderr: "I0515 00:25:09.450997 1468 log.go:172] (0xc000afa840) (0xc0005625a0) Create stream\nI0515 00:25:09.451046 1468 log.go:172] (0xc000afa840) (0xc0005625a0) Stream added, broadcasting: 1\nI0515 00:25:09.453443 1468 log.go:172] (0xc000afa840) Reply frame received for 1\nI0515 00:25:09.453556 1468 log.go:172] (0xc000afa840) (0xc0002ecdc0) Create stream\nI0515 00:25:09.453590 1468 log.go:172] (0xc000afa840) (0xc0002ecdc0) Stream added, broadcasting: 3\nI0515 00:25:09.454749 1468 log.go:172] (0xc000afa840) Reply frame received for 3\nI0515 00:25:09.454787 1468 log.go:172] (0xc000afa840) (0xc0005639a0) Create stream\nI0515 00:25:09.454800 1468 log.go:172] (0xc000afa840) (0xc0005639a0) Stream added, broadcasting: 5\nI0515 00:25:09.455744 1468 log.go:172] (0xc000afa840) Reply frame received for 5\nI0515 00:25:09.510246 1468 log.go:172] (0xc000afa840) Data frame received for 5\nI0515 00:25:09.510290 1468 log.go:172] (0xc0005639a0) (5) Data frame handling\nI0515 00:25:09.510303 1468 log.go:172] (0xc0005639a0) (5) Data frame sent\nI0515 00:25:09.510313 1468 log.go:172] (0xc000afa840) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 00:25:09.510346 1468 log.go:172] (0xc000afa840) Data frame received for 3\nI0515 00:25:09.510405 1468 log.go:172] (0xc0002ecdc0) (3) Data frame handling\nI0515 00:25:09.510421 1468 log.go:172] (0xc0002ecdc0) (3) Data frame sent\nI0515 00:25:09.510434 1468 log.go:172] (0xc000afa840) Data frame received for 3\nI0515 00:25:09.510441 1468 log.go:172] (0xc0002ecdc0) (3) Data frame handling\nI0515 00:25:09.510476 1468 log.go:172] (0xc0005639a0) (5) Data frame handling\nI0515 00:25:09.511900 1468 log.go:172] (0xc000afa840) Data frame received for 1\nI0515 00:25:09.511927 1468 log.go:172] (0xc0005625a0) (1) Data frame handling\nI0515 00:25:09.511950 1468 log.go:172] (0xc0005625a0) (1) Data frame sent\nI0515 00:25:09.511978 1468 log.go:172] (0xc000afa840) (0xc0005625a0) Stream removed, broadcasting: 1\nI0515 00:25:09.511997 1468 log.go:172] (0xc000afa840) Go away received\nI0515 00:25:09.512605 1468 log.go:172] (0xc000afa840) (0xc0005625a0) Stream removed, broadcasting: 1\nI0515 00:25:09.512629 1468 log.go:172] (0xc000afa840) (0xc0002ecdc0) Stream removed, broadcasting: 3\nI0515 00:25:09.512639 1468 log.go:172] (0xc000afa840) (0xc0005639a0) Stream removed, broadcasting: 5\n" May 15 00:25:09.518: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:25:09.518: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:25:09.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:25:09.743: INFO: stderr: "I0515 00:25:09.661263 1489 log.go:172] (0xc000a8c840) (0xc000568dc0) Create stream\nI0515 00:25:09.661335 1489 log.go:172] (0xc000a8c840) (0xc000568dc0) Stream added, broadcasting: 1\nI0515 00:25:09.663788 1489 log.go:172] (0xc000a8c840) Reply frame received for 1\nI0515 00:25:09.663829 1489 log.go:172] (0xc000a8c840) (0xc000229f40) Create stream\nI0515 00:25:09.663838 1489 log.go:172] (0xc000a8c840) 
(0xc000229f40) Stream added, broadcasting: 3\nI0515 00:25:09.664857 1489 log.go:172] (0xc000a8c840) Reply frame received for 3\nI0515 00:25:09.664897 1489 log.go:172] (0xc000a8c840) (0xc0002f2c80) Create stream\nI0515 00:25:09.664907 1489 log.go:172] (0xc000a8c840) (0xc0002f2c80) Stream added, broadcasting: 5\nI0515 00:25:09.665932 1489 log.go:172] (0xc000a8c840) Reply frame received for 5\nI0515 00:25:09.735221 1489 log.go:172] (0xc000a8c840) Data frame received for 3\nI0515 00:25:09.735255 1489 log.go:172] (0xc000229f40) (3) Data frame handling\nI0515 00:25:09.735267 1489 log.go:172] (0xc000229f40) (3) Data frame sent\nI0515 00:25:09.735276 1489 log.go:172] (0xc000a8c840) Data frame received for 3\nI0515 00:25:09.735285 1489 log.go:172] (0xc000229f40) (3) Data frame handling\nI0515 00:25:09.735315 1489 log.go:172] (0xc000a8c840) Data frame received for 5\nI0515 00:25:09.735324 1489 log.go:172] (0xc0002f2c80) (5) Data frame handling\nI0515 00:25:09.735334 1489 log.go:172] (0xc0002f2c80) (5) Data frame sent\nI0515 00:25:09.735343 1489 log.go:172] (0xc000a8c840) Data frame received for 5\nI0515 00:25:09.735356 1489 log.go:172] (0xc0002f2c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0515 00:25:09.737280 1489 log.go:172] (0xc000a8c840) Data frame received for 1\nI0515 00:25:09.737308 1489 log.go:172] (0xc000568dc0) (1) Data frame handling\nI0515 00:25:09.737320 1489 log.go:172] (0xc000568dc0) (1) Data frame sent\nI0515 00:25:09.737341 1489 log.go:172] (0xc000a8c840) (0xc000568dc0) Stream removed, broadcasting: 1\nI0515 00:25:09.737410 1489 log.go:172] (0xc000a8c840) Go away received\nI0515 00:25:09.737743 1489 log.go:172] (0xc000a8c840) (0xc000568dc0) Stream removed, broadcasting: 1\nI0515 00:25:09.737765 1489 log.go:172] (0xc000a8c840) (0xc000229f40) Stream removed, broadcasting: 3\nI0515 00:25:09.737776 1489 log.go:172] (0xc000a8c840) (0xc0002f2c80) Stream removed, broadcasting: 5\n" May 15 00:25:09.743: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:25:09.743: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:25:09.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:25:09.939: INFO: stderr: "I0515 00:25:09.865649 1509 log.go:172] (0xc0009f00b0) (0xc0004a9540) Create stream\nI0515 00:25:09.865703 1509 log.go:172] (0xc0009f00b0) (0xc0004a9540) Stream added, broadcasting: 1\nI0515 00:25:09.867804 1509 log.go:172] (0xc0009f00b0) Reply frame received for 1\nI0515 00:25:09.867857 1509 log.go:172] (0xc0009f00b0) (0xc00043edc0) Create stream\nI0515 00:25:09.867874 1509 log.go:172] (0xc0009f00b0) (0xc00043edc0) Stream added, broadcasting: 3\nI0515 00:25:09.868616 1509 log.go:172] (0xc0009f00b0) Reply frame received for 3\nI0515 00:25:09.868667 1509 log.go:172] (0xc0009f00b0) (0xc0004a99a0) Create stream\nI0515 00:25:09.868692 1509 log.go:172] (0xc0009f00b0) (0xc0004a99a0) Stream added, broadcasting: 5\nI0515 00:25:09.869764 1509 log.go:172] (0xc0009f00b0) Reply frame received for 5\nI0515 00:25:09.931244 1509 log.go:172] (0xc0009f00b0) Data frame received for 3\nI0515 00:25:09.931280 1509 log.go:172] (0xc00043edc0) (3) Data frame handling\nI0515 00:25:09.931310 1509 
log.go:172] (0xc0009f00b0) Data frame received for 5\nI0515 00:25:09.931346 1509 log.go:172] (0xc0004a99a0) (5) Data frame handling\nI0515 00:25:09.931359 1509 log.go:172] (0xc0004a99a0) (5) Data frame sent\nI0515 00:25:09.931389 1509 log.go:172] (0xc0009f00b0) Data frame received for 5\nI0515 00:25:09.931417 1509 log.go:172] (0xc0004a99a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0515 00:25:09.931434 1509 log.go:172] (0xc00043edc0) (3) Data frame sent\nI0515 00:25:09.931446 1509 log.go:172] (0xc0009f00b0) Data frame received for 3\nI0515 00:25:09.931461 1509 log.go:172] (0xc00043edc0) (3) Data frame handling\nI0515 00:25:09.932982 1509 log.go:172] (0xc0009f00b0) Data frame received for 1\nI0515 00:25:09.933018 1509 log.go:172] (0xc0004a9540) (1) Data frame handling\nI0515 00:25:09.933064 1509 log.go:172] (0xc0004a9540) (1) Data frame sent\nI0515 00:25:09.933480 1509 log.go:172] (0xc0009f00b0) (0xc0004a9540) Stream removed, broadcasting: 1\nI0515 00:25:09.933513 1509 log.go:172] (0xc0009f00b0) Go away received\nI0515 00:25:09.933889 1509 log.go:172] (0xc0009f00b0) (0xc0004a9540) Stream removed, broadcasting: 1\nI0515 00:25:09.933908 1509 log.go:172] (0xc0009f00b0) (0xc00043edc0) Stream removed, broadcasting: 3\nI0515 00:25:09.933923 1509 log.go:172] (0xc0009f00b0) (0xc0004a99a0) Stream removed, broadcasting: 5\n" May 15 00:25:09.939: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:25:09.939: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:25:09.943: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 00:25:09.943: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 00:25:09.943: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 15 00:25:09.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:25:10.162: INFO: stderr: "I0515 00:25:10.093437 1531 log.go:172] (0xc000bf66e0) (0xc00070dea0) Create stream\nI0515 00:25:10.093506 1531 log.go:172] (0xc000bf66e0) (0xc00070dea0) Stream added, broadcasting: 1\nI0515 00:25:10.096073 1531 log.go:172] (0xc000bf66e0) Reply frame received for 1\nI0515 00:25:10.096119 1531 log.go:172] (0xc000bf66e0) (0xc000614500) Create stream\nI0515 00:25:10.096131 1531 log.go:172] (0xc000bf66e0) (0xc000614500) Stream added, broadcasting: 3\nI0515 00:25:10.097296 1531 log.go:172] (0xc000bf66e0) Reply frame received for 3\nI0515 00:25:10.097316 1531 log.go:172] (0xc000bf66e0) (0xc0004e0d20) Create stream\nI0515 00:25:10.097323 1531 log.go:172] (0xc000bf66e0) (0xc0004e0d20) Stream added, broadcasting: 5\nI0515 00:25:10.098327 1531 log.go:172] (0xc000bf66e0) Reply frame received for 5\nI0515 00:25:10.156805 1531 log.go:172] (0xc000bf66e0) Data frame received for 5\nI0515 00:25:10.156861 1531 log.go:172] (0xc0004e0d20) (5) Data frame handling\nI0515 00:25:10.156880 1531 log.go:172] (0xc0004e0d20) (5) Data frame sent\nI0515 00:25:10.156890 1531 log.go:172] (0xc000bf66e0) Data frame received for 5\nI0515 00:25:10.156898 1531 log.go:172] (0xc0004e0d20) (5) Data frame 
handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:25:10.156919 1531 log.go:172] (0xc000bf66e0) Data frame received for 3\nI0515 00:25:10.156945 1531 log.go:172] (0xc000614500) (3) Data frame handling\nI0515 00:25:10.156966 1531 log.go:172] (0xc000614500) (3) Data frame sent\nI0515 00:25:10.156978 1531 log.go:172] (0xc000bf66e0) Data frame received for 3\nI0515 00:25:10.156985 1531 log.go:172] (0xc000614500) (3) Data frame handling\nI0515 00:25:10.158190 1531 log.go:172] (0xc000bf66e0) Data frame received for 1\nI0515 00:25:10.158218 1531 log.go:172] (0xc00070dea0) (1) Data frame handling\nI0515 00:25:10.158248 1531 log.go:172] (0xc00070dea0) (1) Data frame sent\nI0515 00:25:10.158271 1531 log.go:172] (0xc000bf66e0) (0xc00070dea0) Stream removed, broadcasting: 1\nI0515 00:25:10.158291 1531 log.go:172] (0xc000bf66e0) Go away received\nI0515 00:25:10.158615 1531 log.go:172] (0xc000bf66e0) (0xc00070dea0) Stream removed, broadcasting: 1\nI0515 00:25:10.158630 1531 log.go:172] (0xc000bf66e0) (0xc000614500) Stream removed, broadcasting: 3\nI0515 00:25:10.158637 1531 log.go:172] (0xc000bf66e0) (0xc0004e0d20) Stream removed, broadcasting: 5\n" May 15 00:25:10.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:25:10.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:25:10.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:25:10.411: INFO: stderr: "I0515 00:25:10.299528 1551 log.go:172] (0xc00095c0b0) (0xc0004fc8c0) Create stream\nI0515 00:25:10.299583 1551 log.go:172] (0xc00095c0b0) (0xc0004fc8c0) Stream added, broadcasting: 1\nI0515 00:25:10.302171 1551 log.go:172] (0xc00095c0b0) Reply frame received for 1\nI0515 00:25:10.302225 1551 log.go:172] (0xc00095c0b0) (0xc00048ed20) Create stream\nI0515 00:25:10.302239 1551 log.go:172] (0xc00095c0b0) (0xc00048ed20) Stream added, broadcasting: 3\nI0515 00:25:10.303134 1551 log.go:172] (0xc00095c0b0) Reply frame received for 3\nI0515 00:25:10.303170 1551 log.go:172] (0xc00095c0b0) (0xc0004fd4a0) Create stream\nI0515 00:25:10.303181 1551 log.go:172] (0xc00095c0b0) (0xc0004fd4a0) Stream added, broadcasting: 5\nI0515 00:25:10.304031 1551 log.go:172] (0xc00095c0b0) Reply frame received for 5\nI0515 00:25:10.371494 1551 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0515 00:25:10.371532 1551 log.go:172] (0xc0004fd4a0) (5) Data frame handling\nI0515 00:25:10.371551 1551 log.go:172] (0xc0004fd4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:25:10.403976 1551 log.go:172] (0xc00095c0b0) Data frame received for 3\nI0515 00:25:10.404085 1551 log.go:172] (0xc00048ed20) (3) Data frame handling\nI0515 00:25:10.404133 1551 log.go:172] (0xc00048ed20) (3) Data frame sent\nI0515 00:25:10.404153 1551 log.go:172] (0xc00095c0b0) Data frame received for 3\nI0515 00:25:10.404161 1551 log.go:172] (0xc00048ed20) (3) Data frame handling\nI0515 00:25:10.404174 1551 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0515 00:25:10.404181 1551 log.go:172] (0xc0004fd4a0) (5) Data frame handling\nI0515 00:25:10.405968 1551 log.go:172] (0xc00095c0b0) Data frame received for 1\nI0515 00:25:10.405992 1551 log.go:172] (0xc0004fc8c0) (1) Data frame handling\nI0515 00:25:10.406012 1551 log.go:172] 
(0xc0004fc8c0) (1) Data frame sent\nI0515 00:25:10.406027 1551 log.go:172] (0xc00095c0b0) (0xc0004fc8c0) Stream removed, broadcasting: 1\nI0515 00:25:10.406042 1551 log.go:172] (0xc00095c0b0) Go away received\nI0515 00:25:10.406360 1551 log.go:172] (0xc00095c0b0) (0xc0004fc8c0) Stream removed, broadcasting: 1\nI0515 00:25:10.406378 1551 log.go:172] (0xc00095c0b0) (0xc00048ed20) Stream removed, broadcasting: 3\nI0515 00:25:10.406384 1551 log.go:172] (0xc00095c0b0) (0xc0004fd4a0) Stream removed, broadcasting: 5\n" May 15 00:25:10.411: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:25:10.411: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:25:10.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:25:10.676: INFO: stderr: "I0515 00:25:10.546747 1573 log.go:172] (0xc00098f290) (0xc000a503c0) Create stream\nI0515 00:25:10.546825 1573 log.go:172] (0xc00098f290) (0xc000a503c0) Stream added, broadcasting: 1\nI0515 00:25:10.551565 1573 log.go:172] (0xc00098f290) Reply frame received for 1\nI0515 00:25:10.551606 1573 log.go:172] (0xc00098f290) (0xc000430dc0) Create stream\nI0515 00:25:10.551619 1573 log.go:172] (0xc00098f290) (0xc000430dc0) Stream added, broadcasting: 3\nI0515 00:25:10.552439 1573 log.go:172] (0xc00098f290) Reply frame received for 3\nI0515 00:25:10.552471 1573 log.go:172] (0xc00098f290) (0xc00049a780) Create stream\nI0515 00:25:10.552482 1573 log.go:172] (0xc00098f290) (0xc00049a780) Stream added, broadcasting: 5\nI0515 00:25:10.553640 1573 log.go:172] (0xc00098f290) Reply frame received for 5\nI0515 00:25:10.615384 1573 log.go:172] (0xc00098f290) Data frame received for 5\nI0515 00:25:10.615429 1573 log.go:172] (0xc00049a780) (5) Data frame handling\nI0515 00:25:10.615471 1573 log.go:172] (0xc00049a780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:25:10.668087 1573 log.go:172] (0xc00098f290) Data frame received for 3\nI0515 00:25:10.668119 1573 log.go:172] (0xc000430dc0) (3) Data frame handling\nI0515 00:25:10.668135 1573 log.go:172] (0xc000430dc0) (3) Data frame sent\nI0515 00:25:10.668320 1573 log.go:172] (0xc00098f290) Data frame received for 3\nI0515 00:25:10.668331 1573 log.go:172] (0xc000430dc0) (3) Data frame handling\nI0515 00:25:10.668854 1573 log.go:172] (0xc00098f290) Data frame received for 5\nI0515 00:25:10.668873 1573 log.go:172] (0xc00049a780) (5) Data frame handling\nI0515 00:25:10.670312 1573 log.go:172] (0xc00098f290) Data frame received for 1\nI0515 00:25:10.670323 1573 log.go:172] (0xc000a503c0) (1) Data frame handling\nI0515 00:25:10.670329 1573 log.go:172] (0xc000a503c0) (1) Data frame sent\nI0515 00:25:10.670478 1573 log.go:172] (0xc00098f290) (0xc000a503c0) Stream removed, broadcasting: 1\nI0515 00:25:10.670525 1573 log.go:172] (0xc00098f290) Go away received\nI0515 00:25:10.670900 1573 log.go:172] (0xc00098f290) (0xc000a503c0) Stream removed, broadcasting: 1\nI0515 00:25:10.670921 1573 log.go:172] (0xc00098f290) (0xc000430dc0) Stream removed, broadcasting: 3\nI0515 00:25:10.670933 1573 log.go:172] (0xc00098f290) (0xc00049a780) Stream removed, broadcasting: 5\n" May 15 00:25:10.676: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:25:10.676: INFO: stdout of mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:25:10.676: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:25:10.679: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 15 00:25:20.688: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 00:25:20.688: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 00:25:20.688: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 00:25:20.710: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:20.710: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:20.710: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:20.710: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:20.710: INFO: May 15 00:25:20.710: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:21.728: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:21.728: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:21.728: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:21.728: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:21.728: INFO: May 15 00:25:21.728: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:22.765: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:22.765: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:22.765: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:22.765: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:22.765: INFO: May 15 00:25:22.765: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:23.771: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:23.771: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:23.771: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:23.771: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:23.771: INFO: May 15 00:25:23.771: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:24.788: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:24.788: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:24.788: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:24.788: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:24.788: INFO: May 15 00:25:24.788: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:25.792: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:25.792: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:25.792: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:25.792: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:25.792: INFO: May 15 00:25:25.792: INFO: StatefulSet ss has not reached scale 
0, at 3 May 15 00:25:26.797: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:26.797: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:26.797: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:26.797: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:26.797: INFO: May 15 00:25:26.797: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:27.802: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:27.802: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:27.802: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:27.802: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:27.802: INFO: May 15 00:25:27.802: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:28.807: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:28.807: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:28.807: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:28.807: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:28.807: INFO: May 15 00:25:28.807: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 00:25:29.812: INFO: POD NODE PHASE GRACE CONDITIONS May 15 00:25:29.813: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:38 +0000 UTC }] May 15 00:25:29.813: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:58 +0000 UTC }] May 15 00:25:29.813: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:25:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 00:24:59 +0000 UTC }] May 15 00:25:29.813: INFO: May 15 00:25:29.813: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods will run in namespace statefulset-7483 May 15 00:25:30.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:25:30.957: INFO: rc: 1 May 15
00:25:30.957: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 15 00:25:40.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:25:41.067: INFO: rc: 1 May 15 00:25:41.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
(The same RunHostCmd attempt is retried every 10s from 00:25:51 through 00:30:24; every attempt returns rc: 1 with the identical error: Error from server (NotFound): pods "ss-0" not found.)
May 15 00:30:34.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7483 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:30:34.178: INFO: rc: 1 May 15 00:30:34.178: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 15 00:30:34.178: INFO: Scaling statefulset ss to 0 May 15 00:30:34.186: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 15 00:30:34.187: INFO: Deleting all statefulset in ns statefulset-7483 May 15 00:30:34.189: INFO: Scaling statefulset ss to 0 May 15 00:30:34.197: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:30:34.199: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:30:34.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7483" for this suite.
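For readers reproducing the burst-scaling scenario above outside the e2e framework, a minimal sketch of a comparable StatefulSet follows. The names ss, webserver, and the namespace mirror the log; the Parallel pod-management policy, labels, and the httpd image are assumptions inferred from the burst-style scaling and the /usr/local/apache2/htdocs paths, not the suite's exact manifest:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-7483
spec:
  podManagementPolicy: Parallel   # burst scaling: pods are created/deleted without ordered waiting
  replicas: 3
  serviceName: test               # assumed headless-service name
  selector:
    matchLabels:
      app: ss                     # assumed label pair
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver           # container name taken from the log
        image: httpd:2.4.38-alpine  # assumption: an httpd-based image, suggested by the apache2 paths
        ports:
        - containerPort: 80

The scale-down would then be kubectl scale statefulset ss --replicas=0 --namespace=statefulset-7483; with Parallel pod management the controller deletes all pods at once rather than in reverse ordinal order, which is why the test can proceed even while the pods are unhealthy.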
• [SLOW TEST:355.780 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":144,"skipped":2367,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:30:34.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 00:30:34.346: INFO: Waiting up to 5m0s for pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985" in namespace "downward-api-718" to be "Succeeded or Failed" May 15 00:30:34.357: INFO: Pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985": Phase="Pending", Reason="", readiness=false. Elapsed: 11.777647ms May 15 00:30:36.468: INFO: Pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121925948s May 15 00:30:38.472: INFO: Pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985": Phase="Running", Reason="", readiness=true. Elapsed: 4.126730248s May 15 00:30:40.476: INFO: Pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130378997s STEP: Saw pod success May 15 00:30:40.476: INFO: Pod "downward-api-9fcd7f55-d75d-40e9-95db-17aded553985" satisfied condition "Succeeded or Failed" May 15 00:30:40.479: INFO: Trying to get logs from node latest-worker2 pod downward-api-9fcd7f55-d75d-40e9-95db-17aded553985 container dapi-container: STEP: delete the pod May 15 00:30:40.536: INFO: Waiting for pod downward-api-9fcd7f55-d75d-40e9-95db-17aded553985 to disappear May 15 00:30:40.550: INFO: Pod downward-api-9fcd7f55-d75d-40e9-95db-17aded553985 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:30:40.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-718" for this suite. 
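The Downward API test above injects the container's own resource limits and requests as environment variables via resourceFieldRef, which is the mechanism being exercised. A minimal sketch of an equivalent pod (the pod name, image, and resource values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29          # assumption: any image with a shell works
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: "1"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory

Running env in such a container prints the four variables, which is essentially what the suite checks for in the pod's logs before tearing the pod down.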
• [SLOW TEST:6.334 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:30:40.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 15 00:30:40.649: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 15 00:30:51.341: INFO: >>> kubeConfig: /root/.kube/config May 15 00:30:54.299: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:31:05.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7248" for this suite. 
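The OpenAPI-publishing check above covers CRDs that serve several versions under one group. A minimal sketch of the multiversion shape it registers (group, names, and schemas here are illustrative, not the generated e2e-test-* names):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com   # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true
    storage: true                   # exactly one version may be the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object

Every served version then appears in the aggregated /openapi/v2 document, so kubectl explain resolves against each of them, which is what the test verifies for both the one-CRD-two-versions and two-CRDs cases.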
• [SLOW TEST:24.514 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":146,"skipped":2398,"failed":0} [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:31:05.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 15 00:31:05.230: INFO: Created pod &Pod{ObjectMeta:{dns-8055 dns-8055 /api/v1/namespaces/dns-8055/pods/dns-8055 d247aac9-cfef-475e-924a-1929073fd596 4680271 0 2020-05-15 00:31:05 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-15 00:31:05 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wcxs5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wcxs5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wcxs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:defau
lt,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 00:31:05.233: INFO: The status of Pod dns-8055 is Pending, waiting for it to be Running (with Ready = true) May 15 00:31:07.471: INFO: The status of Pod dns-8055 is Pending, waiting for it to be Running (with Ready = true) May 15 00:31:09.237: INFO: The status of Pod dns-8055 is Pending, waiting for it to be Running (with Ready = true) May 15 00:31:11.237: INFO: The status of Pod dns-8055 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
May 15 00:31:11.237: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8055 PodName:dns-8055 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:31:11.237: INFO: >>> kubeConfig: /root/.kube/config I0515 00:31:11.274731 7 log.go:172] (0xc006364420) (0xc001a5b2c0) Create stream I0515 00:31:11.274766 7 log.go:172] (0xc006364420) (0xc001a5b2c0) Stream added, broadcasting: 1 I0515 00:31:11.276893 7 log.go:172] (0xc006364420) Reply frame received for 1 I0515 00:31:11.276949 7 log.go:172] (0xc006364420) (0xc001ae40a0) Create stream I0515 00:31:11.276968 7 log.go:172] (0xc006364420) (0xc001ae40a0) Stream added, broadcasting: 3 I0515 00:31:11.278163 7 log.go:172] (0xc006364420) Reply frame received for 3 I0515 00:31:11.278213 7 log.go:172] (0xc006364420) (0xc001a5b400) Create stream I0515 00:31:11.278227 7 log.go:172] (0xc006364420) (0xc001a5b400) Stream added, broadcasting: 5 I0515 00:31:11.279075 7 log.go:172] (0xc006364420) Reply frame received for 5 I0515 00:31:11.382646 7 log.go:172] (0xc006364420) Data frame received for 3 I0515 00:31:11.382688 7 log.go:172] (0xc001ae40a0) (3) Data frame handling I0515 00:31:11.382718 7 log.go:172] (0xc001ae40a0) (3) Data frame sent I0515 00:31:11.383848 7 log.go:172] (0xc006364420) Data frame received for 3 I0515 00:31:11.383874 7 log.go:172] (0xc001ae40a0) (3) Data frame handling I0515 00:31:11.384177 7 log.go:172] (0xc006364420) Data frame received for 5 I0515 00:31:11.384210 7 log.go:172] (0xc001a5b400) (5) Data frame handling I0515 00:31:11.387767 7 log.go:172] (0xc006364420) Data frame received for 1 I0515 00:31:11.387790 7 log.go:172] (0xc001a5b2c0) (1) Data frame handling I0515 00:31:11.387801 7 log.go:172] (0xc001a5b2c0) (1) Data frame sent I0515 00:31:11.387906 7 log.go:172] (0xc006364420) (0xc001a5b2c0) Stream removed, broadcasting: 1 I0515 00:31:11.387968 7 log.go:172] (0xc006364420) Go away received I0515 00:31:11.387998 7 log.go:172] (0xc006364420) (0xc001a5b2c0) Stream removed, broadcasting: 1 I0515 00:31:11.388039 7 log.go:172] (0xc006364420) (0xc001ae40a0) Stream removed, broadcasting: 3 I0515 00:31:11.388063 7 log.go:172] (0xc006364420) (0xc001a5b400) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 15 00:31:11.388: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8055 PodName:dns-8055 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:31:11.388: INFO: >>> kubeConfig: /root/.kube/config I0515 00:31:11.423690 7 log.go:172] (0xc005595550) (0xc00037be00) Create stream I0515 00:31:11.423715 7 log.go:172] (0xc005595550) (0xc00037be00) Stream added, broadcasting: 1 I0515 00:31:11.426040 7 log.go:172] (0xc005595550) Reply frame received for 1 I0515 00:31:11.426076 7 log.go:172] (0xc005595550) (0xc001a5b5e0) Create stream I0515 00:31:11.426089 7 log.go:172] (0xc005595550) (0xc001a5b5e0) Stream added, broadcasting: 3 I0515 00:31:11.427261 7 log.go:172] (0xc005595550) Reply frame received for 3 I0515 00:31:11.427316 7 log.go:172] (0xc005595550) (0xc001a5b680) Create stream I0515 00:31:11.427334 7 log.go:172] (0xc005595550) (0xc001a5b680) Stream added, broadcasting: 5 I0515 00:31:11.428481 7 log.go:172] (0xc005595550) Reply frame received for 5 I0515 00:31:11.503742 7 log.go:172] (0xc005595550) Data frame received for 3 I0515 00:31:11.503772 7 log.go:172] (0xc001a5b5e0) (3) Data frame handling I0515 00:31:11.503800 7 log.go:172] (0xc001a5b5e0) (3) Data frame sent I0515 00:31:11.505545 7 log.go:172] (0xc005595550) Data frame received for 3 I0515 00:31:11.505574 7 log.go:172] (0xc001a5b5e0) (3) Data frame handling I0515 00:31:11.505703 7 log.go:172] (0xc005595550) Data frame received for 5 I0515 00:31:11.505723 7 log.go:172] (0xc001a5b680) (5) Data frame handling I0515 00:31:11.507078 7 log.go:172] (0xc005595550) Data frame received for 1 I0515 00:31:11.507100 7 log.go:172] (0xc00037be00) (1) Data frame handling I0515 00:31:11.507115 7 log.go:172] (0xc00037be00) (1) Data frame sent I0515 00:31:11.507138 7 log.go:172] (0xc005595550) (0xc00037be00) Stream removed, broadcasting: 1 I0515 00:31:11.507226 7 log.go:172] (0xc005595550) (0xc00037be00) Stream removed, broadcasting: 1 I0515 00:31:11.507242 7 log.go:172] (0xc005595550) (0xc001a5b5e0) Stream removed, broadcasting: 3 I0515 00:31:11.507386 7 log.go:172] (0xc005595550) (0xc001a5b680) Stream removed, broadcasting: 5 May 15 00:31:11.507: INFO: Deleting pod dns-8055... I0515 00:31:11.507942 7 log.go:172] (0xc005595550) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:31:11.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8055" for this suite. 
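The dnsPolicy=None pod this test creates can be read directly out of the PodSpec dump above; rendered as a manifest it is roughly:

apiVersion: v1
kind: Pod
metadata:
  name: dns-8055
  namespace: dns-8055
spec:
  dnsPolicy: "None"          # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1                # becomes the only nameserver in the pod's resolv.conf
    searches:
    - resolv.conf.local      # becomes the search suffix
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]

The two ExecWithOptions calls then run /agnhost dns-suffix and /agnhost dns-server-list inside the pod to confirm that resolv.conf carries exactly that search suffix and nameserver.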
• [SLOW TEST:6.530 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":147,"skipped":2398,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:31:11.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:31:12.091: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:31:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1785" for this suite. 
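The simple CustomResourceDefinition test above exercises only the create/delete lifecycle of definition objects. For orientation, a minimal single-version CRD paired with an instance might look like the following (all names illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.io
spec:
  group: example.io
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true  # accept arbitrary fields in this sketch
---
apiVersion: example.io/v1
kind: Widget
metadata:
  name: demo
  namespace: default

Deleting the definition (kubectl delete crd widgets.example.io) removes the served API and garbage-collects any remaining Widget instances, which is essentially the lifecycle the test drives through the apiextensions client.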
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":148,"skipped":2399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:31:13.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:31:14.456: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:31:16.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:31:18.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099474, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:31:21.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate 
custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:31:21.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3293-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:31:22.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6381" for this suite. STEP: Destroying namespace "webhook-6381-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.727 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":149,"skipped":2424,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:31:22.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2558 May 15 00:31:26.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 15 00:31:27.227: INFO: stderr: "I0515 00:31:27.123494 2199 log.go:172] (0xc000c360b0) (0xc000356be0) Create stream\nI0515 00:31:27.123559 2199 log.go:172] (0xc000c360b0) (0xc000356be0) Stream added, broadcasting: 1\nI0515 00:31:27.126239 2199 log.go:172] (0xc000c360b0) Reply frame received for 1\nI0515 00:31:27.126268 2199 log.go:172] (0xc000c360b0) (0xc000357220) Create stream\nI0515 00:31:27.126281 2199 log.go:172] (0xc000c360b0) (0xc000357220) Stream added, broadcasting: 3\nI0515 00:31:27.127224 2199 log.go:172] (0xc000c360b0) Reply frame received for 3\nI0515 00:31:27.127272 2199 log.go:172] (0xc000c360b0) (0xc0006b0b40) Create stream\nI0515 00:31:27.127284 2199 log.go:172] (0xc000c360b0) (0xc0006b0b40) Stream added, broadcasting: 
5\nI0515 00:31:27.128205 2199 log.go:172] (0xc000c360b0) Reply frame received for 5\nI0515 00:31:27.213350 2199 log.go:172] (0xc000c360b0) Data frame received for 5\nI0515 00:31:27.213388 2199 log.go:172] (0xc0006b0b40) (5) Data frame handling\nI0515 00:31:27.213414 2199 log.go:172] (0xc0006b0b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0515 00:31:27.217712 2199 log.go:172] (0xc000c360b0) Data frame received for 3\nI0515 00:31:27.217735 2199 log.go:172] (0xc000357220) (3) Data frame handling\nI0515 00:31:27.217747 2199 log.go:172] (0xc000357220) (3) Data frame sent\nI0515 00:31:27.218216 2199 log.go:172] (0xc000c360b0) Data frame received for 3\nI0515 00:31:27.218240 2199 log.go:172] (0xc000357220) (3) Data frame handling\nI0515 00:31:27.218280 2199 log.go:172] (0xc000c360b0) Data frame received for 5\nI0515 00:31:27.218303 2199 log.go:172] (0xc0006b0b40) (5) Data frame handling\nI0515 00:31:27.222722 2199 log.go:172] (0xc000c360b0) Data frame received for 1\nI0515 00:31:27.222764 2199 log.go:172] (0xc000356be0) (1) Data frame handling\nI0515 00:31:27.222794 2199 log.go:172] (0xc000356be0) (1) Data frame sent\nI0515 00:31:27.222816 2199 log.go:172] (0xc000c360b0) (0xc000356be0) Stream removed, broadcasting: 1\nI0515 00:31:27.222833 2199 log.go:172] (0xc000c360b0) Go away received\nI0515 00:31:27.223258 2199 log.go:172] (0xc000c360b0) (0xc000356be0) Stream removed, broadcasting: 1\nI0515 00:31:27.223303 2199 log.go:172] (0xc000c360b0) (0xc000357220) Stream removed, broadcasting: 3\nI0515 00:31:27.223319 2199 log.go:172] (0xc000c360b0) (0xc0006b0b40) Stream removed, broadcasting: 5\n" May 15 00:31:27.227: INFO: stdout: "iptables" May 15 00:31:27.227: INFO: proxyMode: iptables May 15 00:31:27.250: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:31:27.265: INFO: Pod kube-proxy-mode-detector still exists May 15 00:31:29.265: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:31:29.269: INFO: Pod kube-proxy-mode-detector still exists May 15 00:31:31.265: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:31:31.268: INFO: Pod kube-proxy-mode-detector still exists May 15 00:31:33.265: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:31:33.269: INFO: Pod kube-proxy-mode-detector still exists May 15 00:31:35.265: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:31:35.268: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2558 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2558 I0515 00:31:35.360442 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2558, replica count: 3 I0515 00:31:38.410858 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:31:41.411097 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:31:41.418: INFO: Creating new exec pod May 15 00:31:46.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 execpod-affinityv7w6p -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 15 00:31:46.752: INFO: stderr: "I0515 00:31:46.648307 2219 
log.go:172] (0xc0009f80b0) (0xc0004fc5a0) Create stream\nI0515 00:31:46.648350 2219 log.go:172] (0xc0009f80b0) (0xc0004fc5a0) Stream added, broadcasting: 1\nI0515 00:31:46.651002 2219 log.go:172] (0xc0009f80b0) Reply frame received for 1\nI0515 00:31:46.651072 2219 log.go:172] (0xc0009f80b0) (0xc00034adc0) Create stream\nI0515 00:31:46.651091 2219 log.go:172] (0xc0009f80b0) (0xc00034adc0) Stream added, broadcasting: 3\nI0515 00:31:46.652144 2219 log.go:172] (0xc0009f80b0) Reply frame received for 3\nI0515 00:31:46.652203 2219 log.go:172] (0xc0009f80b0) (0xc000528140) Create stream\nI0515 00:31:46.652230 2219 log.go:172] (0xc0009f80b0) (0xc000528140) Stream added, broadcasting: 5\nI0515 00:31:46.653386 2219 log.go:172] (0xc0009f80b0) Reply frame received for 5\nI0515 00:31:46.730048 2219 log.go:172] (0xc0009f80b0) Data frame received for 5\nI0515 00:31:46.730081 2219 log.go:172] (0xc000528140) (5) Data frame handling\nI0515 00:31:46.730117 2219 log.go:172] (0xc000528140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0515 00:31:46.744141 2219 log.go:172] (0xc0009f80b0) Data frame received for 5\nI0515 00:31:46.744175 2219 log.go:172] (0xc000528140) (5) Data frame handling\nI0515 00:31:46.744205 2219 log.go:172] (0xc000528140) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0515 00:31:46.744336 2219 log.go:172] (0xc0009f80b0) Data frame received for 3\nI0515 00:31:46.744357 2219 log.go:172] (0xc00034adc0) (3) Data frame handling\nI0515 00:31:46.744406 2219 log.go:172] (0xc0009f80b0) Data frame received for 5\nI0515 00:31:46.744430 2219 log.go:172] (0xc000528140) (5) Data frame handling\nI0515 00:31:46.746763 2219 log.go:172] (0xc0009f80b0) Data frame received for 1\nI0515 00:31:46.746799 2219 log.go:172] (0xc0004fc5a0) (1) Data frame handling\nI0515 00:31:46.746822 2219 log.go:172] (0xc0004fc5a0) (1) Data frame sent\nI0515 00:31:46.746851 2219 log.go:172] (0xc0009f80b0) (0xc0004fc5a0) Stream removed, broadcasting: 1\nI0515 00:31:46.746931 2219 log.go:172] (0xc0009f80b0) Go away received\nI0515 00:31:46.747239 2219 log.go:172] (0xc0009f80b0) (0xc0004fc5a0) Stream removed, broadcasting: 1\nI0515 00:31:46.747261 2219 log.go:172] (0xc0009f80b0) (0xc00034adc0) Stream removed, broadcasting: 3\nI0515 00:31:46.747273 2219 log.go:172] (0xc0009f80b0) (0xc000528140) Stream removed, broadcasting: 5\n" May 15 00:31:46.752: INFO: stdout: "" May 15 00:31:46.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 execpod-affinityv7w6p -- /bin/sh -x -c nc -zv -t -w 2 10.108.10.210 80' May 15 00:31:46.983: INFO: stderr: "I0515 00:31:46.893831 2240 log.go:172] (0xc000a9f550) (0xc000ac6640) Create stream\nI0515 00:31:46.893893 2240 log.go:172] (0xc000a9f550) (0xc000ac6640) Stream added, broadcasting: 1\nI0515 00:31:46.898506 2240 log.go:172] (0xc000a9f550) Reply frame received for 1\nI0515 00:31:46.898586 2240 log.go:172] (0xc000a9f550) (0xc0006f0500) Create stream\nI0515 00:31:46.898621 2240 log.go:172] (0xc000a9f550) (0xc0006f0500) Stream added, broadcasting: 3\nI0515 00:31:46.899782 2240 log.go:172] (0xc000a9f550) Reply frame received for 3\nI0515 00:31:46.899813 2240 log.go:172] (0xc000a9f550) (0xc00052ed20) Create stream\nI0515 00:31:46.899822 2240 log.go:172] (0xc000a9f550) (0xc00052ed20) Stream added, broadcasting: 5\nI0515 00:31:46.900996 2240 log.go:172] (0xc000a9f550) Reply frame received for 5\nI0515 00:31:46.975975 2240 log.go:172] (0xc000a9f550) 
[SPDY stream-frame bookkeeping elided]\n+ nc -zv -t -w 2 10.108.10.210 80\nConnection to 10.108.10.210 80 port [tcp/http] succeeded!\n"
May 15 00:31:46.983: INFO: stdout: ""
May 15 00:31:46.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 execpod-affinityv7w6p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.10.210:80/ ; done'
May 15 00:31:47.292: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.10.210:80/\n(the echo/curl pair repeats for all sixteen iterations; SPDY stream-frame bookkeeping elided)"
May 15 00:31:47.292: INFO: stdout: "\naffinity-clusterip-timeout-cvllw" (the same pod name, sixteen times)
May 15 00:31:47.292: INFO: Received response from host:
May 15 00:31:47.292: INFO: Received response from host: affinity-clusterip-timeout-cvllw (repeated sixteen times in total; every request in the loop was served by the same backend pod)
May 15 00:31:47.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 execpod-affinityv7w6p -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.108.10.210:80/'
May 15 00:31:47.523: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.108.10.210:80/\n(SPDY stream-frame bookkeeping elided)"
May 15 00:31:47.523: INFO: stdout: "affinity-clusterip-timeout-cvllw"
May 15 00:32:02.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2558 execpod-affinityv7w6p -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.108.10.210:80/'
May 15 00:32:02.778: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.108.10.210:80/\n(SPDY stream-frame bookkeeping elided)"
May 15 00:32:02.779: INFO: stdout: "affinity-clusterip-timeout-m9z2s"
May 15 00:32:02.779: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2558, will wait for the garbage collector to delete the pods
May 15 00:32:02.887: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 12.809408ms
May 15 00:32:03.387: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.250163ms
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 00:32:15.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2558" for this suite.
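For orientation, the behavior recorded above (sixteen consecutive responses from affinity-clusterip-timeout-cvllw, then affinity-clusterip-timeout-m9z2s once the client sat idle for 15 seconds) is what a ClusterIP Service with ClientIP session affinity and an affinity timeout produces. A minimal sketch of such a Service follows; the sessionAffinity fields are the real core/v1 API, but the target port and timeout values are illustrative, since the suite builds the object in Go and this log does not show them:

apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout        # the Service name used by this test
  namespace: services-2558
spec:
  type: ClusterIP
  selector:
    name: affinity-clusterip-timeout      # assumed label; must match the RC's pods
  ports:
  - port: 80                              # the port probed above at 10.108.10.210:80
    targetPort: 9376                      # illustrative backend port, not shown in this log
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10                  # illustrative value; the log only shows the affinity
                                          # lapsing during the 15 s pause between requests

With sessionAffinity: ClientIP, kube-proxy pins each client to a single endpoint; once the client stays idle longer than timeoutSeconds, the affinity entry expires, which is why the curl at 00:32:02 lands on a different pod than the burst at 00:31:47.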
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:52.559 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":150,"skipped":2430,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:32:15.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 00:32:21.514: INFO: DNS probes using dns-test-ee4bad4a-c788-4bf8-955e-648e51148ebc succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 00:32:29.617: INFO: File jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 00:32:29.617: INFO: Lookups using dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb failed for: [jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local] May 15 00:32:34.648: INFO: File wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 15 00:32:34.651: INFO: File jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 00:32:34.651: INFO: Lookups using dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb failed for: [wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local] May 15 00:32:39.627: INFO: File jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 00:32:39.627: INFO: Lookups using dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb failed for: [jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local] May 15 00:32:44.637: INFO: File jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 00:32:44.637: INFO: Lookups using dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb failed for: [jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local] May 15 00:32:49.622: INFO: File wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local from pod dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 00:32:49.626: INFO: Lookups using dns-6962/dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb failed for: [wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local] May 15 00:32:54.629: INFO: DNS probes using dns-test-125c054c-b020-4ce3-9a31-5e803fc8a0eb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6962.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6962.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 00:33:03.354: INFO: DNS probes using dns-test-d9c2c391-ad7d-4583-92dd-68029405855e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:33:03.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6962" for this suite. 
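For reference, the object the dig probes above are resolving is an ExternalName Service, which publishes a CNAME record instead of endpoints. A minimal sketch (the Service name and targets match the log; the YAML itself is illustrative, as the suite creates the object programmatically):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3                # resolves as dns-test-service-3.dns-6962.svc.cluster.local
  namespace: dns-6962
spec:
  type: ExternalName
  externalName: foo.example.com           # the CNAME target; the test later patches this to bar.example.com

Updating spec.externalName rewrites the published CNAME, which is why the probes transiently report 'foo.example.com.' instead of 'bar.example.com.' while resolvers catch up; switching the Service to type=ClusterIP replaces the CNAME with an A record, matching the third set of probes querying A rather than CNAME.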
• [SLOW TEST:48.152 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":151,"skipped":2440,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:33:03.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rssz5 in namespace proxy-2629 I0515 00:33:03.866105 7 runners.go:190] Created replication controller with name: proxy-service-rssz5, namespace: proxy-2629, replica count: 1 I0515 00:33:04.916551 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:33:05.916757 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:33:06.916954 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:33:07.917317 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:08.917515 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:09.917743 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:10.917998 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:11.918158 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:12.918389 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 00:33:13.919034 7 runners.go:190] proxy-service-rssz5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:33:13.930: INFO: setup took 10.12670849s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 15 00:33:13.941: INFO: (0) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 10.905572ms) May 15 
00:33:13.941: INFO: (0) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 10.985645ms) May 15 00:33:13.941: INFO: (0) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 11.439089ms) May 15 00:33:13.941: INFO: (0) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 11.499415ms) May 15 00:33:13.941: INFO: (0) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 11.618201ms) May 15 00:33:13.944: INFO: (0) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 13.956235ms) May 15 00:33:13.946: INFO: (0) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 16.222247ms) May 15 00:33:13.946: INFO: (0) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 16.243385ms) May 15 00:33:13.946: INFO: (0) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 16.447962ms) May 15 00:33:13.947: INFO: (0) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 16.983249ms) May 15 00:33:13.947: INFO: (0) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 16.868618ms) May 15 00:33:13.954: INFO: (0) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 24.0683ms) May 15 00:33:13.954: INFO: (0) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 24.122734ms) May 15 00:33:13.954: INFO: (0) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 7.221361ms) May 15 00:33:13.962: INFO: (1) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 7.126456ms) May 15 00:33:13.963: INFO: (1) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 8.000832ms) May 15 00:33:13.963: INFO: (1) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 8.031922ms) May 15 00:33:13.963: INFO: (1) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... (200; 11.448418ms) May 15 00:33:13.966: INFO: (1) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 11.374773ms) May 15 00:33:13.971: INFO: (2) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 4.867674ms) May 15 00:33:13.971: INFO: (2) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.964476ms) May 15 00:33:13.971: INFO: (2) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 5.161939ms) May 15 00:33:13.971: INFO: (2) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 5.06579ms) May 15 00:33:13.972: INFO: (2) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 5.626395ms) May 15 00:33:13.972: INFO: (2) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.704206ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 6.748638ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 6.896539ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 6.904852ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 6.952843ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 7.017125ms) May 15 00:33:13.973: INFO: (2) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 6.609662ms) May 15 00:33:13.980: INFO: (3) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 6.666461ms) May 15 00:33:13.980: INFO: (3) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 6.713623ms) May 15 00:33:13.981: INFO: (3) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 6.796819ms) May 15 00:33:13.981: INFO: (3) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... (200; 7.008588ms) May 15 00:33:13.981: INFO: (3) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 7.083378ms) May 15 00:33:13.981: INFO: (3) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 7.088726ms) May 15 00:33:13.984: INFO: (4) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 2.542314ms) May 15 00:33:13.984: INFO: (4) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 2.928509ms) May 15 00:33:13.984: INFO: (4) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 2.714645ms) May 15 00:33:13.984: INFO: (4) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 3.135002ms) May 15 00:33:13.985: INFO: (4) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 4.057617ms) May 15 00:33:13.986: INFO: (4) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 4.77803ms) May 15 00:33:13.987: INFO: (4) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 4.720817ms) May 15 00:33:13.987: INFO: (4) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 4.985644ms) May 15 00:33:13.987: INFO: (4) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 5.843048ms) May 15 00:33:13.987: INFO: (4) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 5.014649ms) May 15 00:33:13.989: INFO: (4) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 6.960942ms) May 15 00:33:13.989: INFO: (4) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 7.056428ms) May 15 00:33:13.992: INFO: (5) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 3.170641ms) May 15 00:33:13.992: INFO: (5) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 3.14809ms) May 15 00:33:13.992: INFO: (5) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 3.182706ms) May 15 00:33:13.995: INFO: (5) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.931133ms) May 15 00:33:13.995: INFO: (5) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 6.302039ms) May 15 00:33:13.996: INFO: (5) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 6.55121ms) May 15 00:33:13.996: INFO: (5) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 6.901577ms) May 15 00:33:13.996: INFO: (5) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 7.115986ms) May 15 00:33:13.997: INFO: (5) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 8.119806ms) May 15 00:33:13.997: INFO: (5) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 8.077491ms) May 15 00:33:13.997: INFO: (5) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... (200; 4.96054ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 4.966857ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.531175ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.529039ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 5.644311ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.487395ms) May 15 00:33:14.004: INFO: (6) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 5.594828ms) May 15 00:33:14.003: INFO: (6) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 5.565863ms) May 15 00:33:14.004: INFO: (6) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 5.627183ms) May 15 00:33:14.004: INFO: (6) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.69601ms) May 15 00:33:14.004: INFO: (6) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 4.057524ms) May 15 00:33:14.009: INFO: (7) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 4.597012ms) May 15 00:33:14.010: INFO: (7) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 4.733238ms) May 15 00:33:14.010: INFO: (7) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 4.800509ms) May 15 00:33:14.010: INFO: (7) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 4.863793ms) May 15 00:33:14.010: INFO: (7) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 4.785329ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 5.775689ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 5.826427ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.777314ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 5.902493ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 6.029564ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 6.042452ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 6.12751ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 6.063355ms) May 15 00:33:14.011: INFO: (7) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 6.153742ms) May 15 00:33:14.015: INFO: (8) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 3.860904ms) May 15 00:33:14.016: INFO: (8) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.327949ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.378016ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 5.470718ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.491293ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 5.411007ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 5.491188ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 5.454603ms) May 15 00:33:14.017: INFO: (8) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 4.358475ms) May 15 00:33:14.023: INFO: (9) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 4.447155ms) May 15 00:33:14.023: INFO: (9) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 4.389734ms) May 15 00:33:14.023: INFO: (9) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 4.446226ms) May 15 00:33:14.023: INFO: (9) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 4.594003ms) May 15 00:33:14.023: INFO: (9) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test<... (200; 5.2435ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.329305ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 5.580419ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 5.534348ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 5.643524ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 5.715448ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 5.838765ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 5.916781ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 5.921945ms) May 15 00:33:14.029: INFO: (10) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.929973ms) May 15 00:33:14.030: INFO: (10) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 6.680214ms) May 15 00:33:14.032: INFO: (11) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 2.24307ms) May 15 00:33:14.033: INFO: (11) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 3.640129ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 4.067533ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 4.154557ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 4.268973ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 4.307982ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.360232ms) May 15 00:33:14.034: INFO: (11) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 4.339606ms) May 15 00:33:14.035: INFO: (11) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.576719ms) May 15 00:33:14.035: INFO: (11) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 4.971013ms) May 15 00:33:14.035: INFO: (11) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 5.165878ms) May 15 00:33:14.035: INFO: (11) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 5.253017ms) May 15 00:33:14.035: INFO: (11) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.197813ms) May 15 00:33:14.036: INFO: (11) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 5.564105ms) May 15 00:33:14.036: INFO: (11) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 5.598641ms) May 15 00:33:14.041: INFO: (12) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.256553ms) May 15 00:33:14.041: INFO: (12) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.404224ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 5.922494ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 6.004057ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 6.111255ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... (200; 6.127383ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 6.15952ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 6.292021ms) May 15 00:33:14.042: INFO: (12) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 6.198235ms) May 15 00:33:14.043: INFO: (12) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 6.837864ms) May 15 00:33:14.043: INFO: (12) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 7.305762ms) May 15 00:33:14.043: INFO: (12) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 7.39866ms) May 15 00:33:14.043: INFO: (12) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 7.430263ms) May 15 00:33:14.044: INFO: (12) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 8.230283ms) May 15 00:33:14.044: INFO: (12) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 8.372112ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 4.216322ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.769275ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 4.848597ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 4.815706ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 4.913695ms) May 15 00:33:14.049: INFO: (13) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 4.852704ms) May 15 00:33:14.050: INFO: (13) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 5.830752ms) May 15 00:33:14.050: INFO: (13) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... (200; 2.270696ms) May 15 00:33:14.054: INFO: (14) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 2.59972ms) May 15 00:33:14.054: INFO: (14) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 3.030804ms) May 15 00:33:14.055: INFO: (14) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 3.428966ms) May 15 00:33:14.057: INFO: (14) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.412704ms) May 15 00:33:14.057: INFO: (14) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 5.059028ms) May 15 00:33:14.057: INFO: (14) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.234532ms) May 15 00:33:14.057: INFO: (14) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.167967ms) May 15 00:33:14.058: INFO: (14) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 5.524411ms) May 15 00:33:14.058: INFO: (14) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 6.052884ms) May 15 00:33:14.058: INFO: (14) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 5.662238ms) May 15 00:33:14.058: INFO: (14) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test<... (200; 4.213431ms) May 15 00:33:14.064: INFO: (15) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 5.825072ms) May 15 00:33:14.064: INFO: (15) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: ... 
(200; 7.026367ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 7.589655ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 7.597924ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 7.715666ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 7.678329ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 7.702866ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 7.756945ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 7.939338ms) May 15 00:33:14.066: INFO: (15) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 7.996672ms) May 15 00:33:14.068: INFO: (16) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 1.942361ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.339862ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 4.336587ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 4.413575ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 4.585286ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 4.862883ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 5.054064ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 5.172473ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 5.116237ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 5.180599ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 5.235694ms) May 15 00:33:14.071: INFO: (16) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 5.182581ms) May 15 00:33:14.072: INFO: (16) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 5.483188ms) May 15 00:33:14.072: INFO: (16) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 6.83436ms) May 15 00:33:14.079: INFO: (17) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 6.886398ms) May 15 00:33:14.079: INFO: (17) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 7.095418ms) May 15 00:33:14.079: INFO: (17) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 7.059314ms) May 15 00:33:14.079: INFO: (17) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test<... 
(200; 7.482083ms) May 15 00:33:14.079: INFO: (17) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 7.423316ms) May 15 00:33:14.082: INFO: (18) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 3.072938ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 3.464013ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g/proxy/: test (200; 3.730466ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 3.848155ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 4.025667ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 3.968449ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 3.935495ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... (200; 4.007557ms) May 15 00:33:14.083: INFO: (18) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:460/proxy/: tls baz (200; 3.993976ms) May 15 00:33:14.084: INFO: (18) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:443/proxy/: test (200; 3.053218ms) May 15 00:33:14.135: INFO: (19) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 50.146594ms) May 15 00:33:14.135: INFO: (19) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname2/proxy/: bar (200; 50.167363ms) May 15 00:33:14.135: INFO: (19) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname2/proxy/: bar (200; 50.264214ms) May 15 00:33:14.136: INFO: (19) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:162/proxy/: bar (200; 50.54274ms) May 15 00:33:14.136: INFO: (19) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:1080/proxy/: ... (200; 50.738631ms) May 15 00:33:14.136: INFO: (19) /api/v1/namespaces/proxy-2629/pods/proxy-service-rssz5-tkt8g:1080/proxy/: test<... 
(200; 50.755682ms) May 15 00:33:14.136: INFO: (19) /api/v1/namespaces/proxy-2629/services/http:proxy-service-rssz5:portname1/proxy/: foo (200; 50.770919ms) May 15 00:33:14.136: INFO: (19) /api/v1/namespaces/proxy-2629/services/proxy-service-rssz5:portname1/proxy/: foo (200; 51.113541ms) May 15 00:33:14.137: INFO: (19) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname2/proxy/: tls qux (200; 51.519763ms) May 15 00:33:14.137: INFO: (19) /api/v1/namespaces/proxy-2629/pods/https:proxy-service-rssz5-tkt8g:462/proxy/: tls qux (200; 51.606508ms) May 15 00:33:14.137: INFO: (19) /api/v1/namespaces/proxy-2629/pods/http:proxy-service-rssz5-tkt8g:160/proxy/: foo (200; 51.554538ms) May 15 00:33:14.138: INFO: (19) /api/v1/namespaces/proxy-2629/services/https:proxy-service-rssz5:tlsportname1/proxy/: tls baz (200; 52.352749ms) STEP: deleting ReplicationController proxy-service-rssz5 in namespace proxy-2629, will wait for the garbage collector to delete the pods May 15 00:33:14.198: INFO: Deleting ReplicationController proxy-service-rssz5 took: 7.185335ms May 15 00:33:14.598: INFO: Terminating ReplicationController proxy-service-rssz5 pods took: 400.184319ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:33:17.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2629" for this suite. • [SLOW TEST:13.839 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":152,"skipped":2456,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:33:17.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 15 00:33:22.538: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:33:22.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4873" for this suite. 
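The adoption and release steps above turn entirely on label selection. A minimal sketch of the ReplicaSet side of the test (the field names are the real apps/v1 API; the pod template's image is a placeholder, since this log does not show the container spec):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
  namespace: replicaset-4873
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release          # matches the orphan pod's 'name' label, so the
                                          # ReplicaSet adopts it instead of creating a new pod
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: k8s.gcr.io/pause:3.2       # placeholder image, not taken from this log

Adoption installs the ReplicaSet as the pod's controller ownerReference; changing the pod's 'name' label so the selector no longer matches removes that ownerReference (the release observed above), after which the ReplicaSet creates a replacement pod to satisfy replicas: 1.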
• [SLOW TEST:5.283 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":153,"skipped":2457,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:33:22.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-7tm2 STEP: Creating a pod to test atomic-volume-subpath May 15 00:33:22.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7tm2" in namespace "subpath-2686" to be "Succeeded or Failed" May 15 00:33:22.918: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.030163ms May 15 00:33:24.943: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060410372s May 15 00:33:26.949: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066665915s May 15 00:33:28.953: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 6.070584911s May 15 00:33:30.958: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 8.075477994s May 15 00:33:32.962: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 10.07955745s May 15 00:33:35.032: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 12.150028777s May 15 00:33:37.036: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 14.153796811s May 15 00:33:39.193: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 16.311365827s May 15 00:33:41.198: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 18.316080799s May 15 00:33:43.203: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 20.321271617s May 15 00:33:45.207: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 22.325050377s May 15 00:33:47.230: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Running", Reason="", readiness=true. Elapsed: 24.347675401s May 15 00:33:49.234: INFO: Pod "pod-subpath-test-projected-7tm2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.351563816s STEP: Saw pod success May 15 00:33:49.234: INFO: Pod "pod-subpath-test-projected-7tm2" satisfied condition "Succeeded or Failed" May 15 00:33:49.236: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-7tm2 container test-container-subpath-projected-7tm2: STEP: delete the pod May 15 00:33:49.284: INFO: Waiting for pod pod-subpath-test-projected-7tm2 to disappear May 15 00:33:49.294: INFO: Pod pod-subpath-test-projected-7tm2 no longer exists STEP: Deleting pod pod-subpath-test-projected-7tm2 May 15 00:33:49.294: INFO: Deleting pod "pod-subpath-test-projected-7tm2" in namespace "subpath-2686" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:33:49.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2686" for this suite. • [SLOW TEST:26.640 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":154,"skipped":2458,"failed":0} [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:33:49.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6929 STEP: creating service affinity-clusterip in namespace services-6929 STEP: creating replication controller affinity-clusterip in namespace services-6929 I0515 00:33:49.482305 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6929, replica count: 3 I0515 00:33:52.532689 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:33:55.532901 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:33:55.552: INFO: Creating new exec pod May 15 00:34:00.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6929 execpod-affinityws4pv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 15 00:34:00.825: INFO: stderr: "I0515 00:34:00.710838 2321 log.go:172] (0xc0005920b0) 
(0xc00066a140) Create stream\nI0515 00:34:00.710898 2321 log.go:172] (0xc0005920b0) (0xc00066a140) Stream added, broadcasting: 1\nI0515 00:34:00.712362 2321 log.go:172] (0xc0005920b0) Reply frame received for 1\nI0515 00:34:00.712403 2321 log.go:172] (0xc0005920b0) (0xc00066a640) Create stream\nI0515 00:34:00.712416 2321 log.go:172] (0xc0005920b0) (0xc00066a640) Stream added, broadcasting: 3\nI0515 00:34:00.713385 2321 log.go:172] (0xc0005920b0) Reply frame received for 3\nI0515 00:34:00.713443 2321 log.go:172] (0xc0005920b0) (0xc00065c3c0) Create stream\nI0515 00:34:00.713464 2321 log.go:172] (0xc0005920b0) (0xc00065c3c0) Stream added, broadcasting: 5\nI0515 00:34:00.714562 2321 log.go:172] (0xc0005920b0) Reply frame received for 5\nI0515 00:34:00.816702 2321 log.go:172] (0xc0005920b0) Data frame received for 5\nI0515 00:34:00.816747 2321 log.go:172] (0xc00065c3c0) (5) Data frame handling\nI0515 00:34:00.816787 2321 log.go:172] (0xc00065c3c0) (5) Data frame sent\nI0515 00:34:00.816808 2321 log.go:172] (0xc0005920b0) Data frame received for 5\nI0515 00:34:00.816825 2321 log.go:172] (0xc00065c3c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0515 00:34:00.816887 2321 log.go:172] (0xc00065c3c0) (5) Data frame sent\nI0515 00:34:00.816919 2321 log.go:172] (0xc0005920b0) Data frame received for 5\nI0515 00:34:00.816938 2321 log.go:172] (0xc00065c3c0) (5) Data frame handling\nI0515 00:34:00.817311 2321 log.go:172] (0xc0005920b0) Data frame received for 3\nI0515 00:34:00.817342 2321 log.go:172] (0xc00066a640) (3) Data frame handling\nI0515 00:34:00.819435 2321 log.go:172] (0xc0005920b0) Data frame received for 1\nI0515 00:34:00.819456 2321 log.go:172] (0xc00066a140) (1) Data frame handling\nI0515 00:34:00.819470 2321 log.go:172] (0xc00066a140) (1) Data frame sent\nI0515 00:34:00.819496 2321 log.go:172] (0xc0005920b0) (0xc00066a140) Stream removed, broadcasting: 1\nI0515 00:34:00.820075 2321 log.go:172] (0xc0005920b0) (0xc00066a140) Stream removed, broadcasting: 1\nI0515 00:34:00.820105 2321 log.go:172] (0xc0005920b0) (0xc00066a640) Stream removed, broadcasting: 3\nI0515 00:34:00.820355 2321 log.go:172] (0xc0005920b0) (0xc00065c3c0) Stream removed, broadcasting: 5\n" May 15 00:34:00.826: INFO: stdout: "" May 15 00:34:00.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6929 execpod-affinityws4pv -- /bin/sh -x -c nc -zv -t -w 2 10.99.118.102 80' May 15 00:34:01.024: INFO: stderr: "I0515 00:34:00.955052 2338 log.go:172] (0xc0009db290) (0xc000b283c0) Create stream\nI0515 00:34:00.955095 2338 log.go:172] (0xc0009db290) (0xc000b283c0) Stream added, broadcasting: 1\nI0515 00:34:00.959722 2338 log.go:172] (0xc0009db290) Reply frame received for 1\nI0515 00:34:00.959769 2338 log.go:172] (0xc0009db290) (0xc0003bec80) Create stream\nI0515 00:34:00.959797 2338 log.go:172] (0xc0009db290) (0xc0003bec80) Stream added, broadcasting: 3\nI0515 00:34:00.960593 2338 log.go:172] (0xc0009db290) Reply frame received for 3\nI0515 00:34:00.960624 2338 log.go:172] (0xc0009db290) (0xc00055c780) Create stream\nI0515 00:34:00.960637 2338 log.go:172] (0xc0009db290) (0xc00055c780) Stream added, broadcasting: 5\nI0515 00:34:00.961511 2338 log.go:172] (0xc0009db290) Reply frame received for 5\nI0515 00:34:01.016955 2338 log.go:172] (0xc0009db290) Data frame received for 3\nI0515 00:34:01.017002 2338 log.go:172] (0xc0003bec80) (3) Data frame handling\nI0515 
00:34:01.017040 2338 log.go:172] (0xc0009db290) Data frame received for 5\nI0515 00:34:01.017290 2338 log.go:172] (0xc00055c780) (5) Data frame handling\nI0515 00:34:01.017374 2338 log.go:172] (0xc00055c780) (5) Data frame sent\nI0515 00:34:01.017399 2338 log.go:172] (0xc0009db290) Data frame received for 5\nI0515 00:34:01.017413 2338 log.go:172] (0xc00055c780) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.118.102 80\nConnection to 10.99.118.102 80 port [tcp/http] succeeded!\nI0515 00:34:01.018681 2338 log.go:172] (0xc0009db290) Data frame received for 1\nI0515 00:34:01.018704 2338 log.go:172] (0xc000b283c0) (1) Data frame handling\nI0515 00:34:01.018725 2338 log.go:172] (0xc000b283c0) (1) Data frame sent\nI0515 00:34:01.018741 2338 log.go:172] (0xc0009db290) (0xc000b283c0) Stream removed, broadcasting: 1\nI0515 00:34:01.018754 2338 log.go:172] (0xc0009db290) Go away received\nI0515 00:34:01.019273 2338 log.go:172] (0xc0009db290) (0xc000b283c0) Stream removed, broadcasting: 1\nI0515 00:34:01.019309 2338 log.go:172] (0xc0009db290) (0xc0003bec80) Stream removed, broadcasting: 3\nI0515 00:34:01.019332 2338 log.go:172] (0xc0009db290) (0xc00055c780) Stream removed, broadcasting: 5\n" May 15 00:34:01.024: INFO: stdout: "" May 15 00:34:01.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6929 execpod-affinityws4pv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.118.102:80/ ; done' May 15 00:34:01.335: INFO: stderr: "I0515 00:34:01.147611 2358 log.go:172] (0xc000a05080) (0xc000bd26e0) Create stream\nI0515 00:34:01.147664 2358 log.go:172] (0xc000a05080) (0xc000bd26e0) Stream added, broadcasting: 1\nI0515 00:34:01.154552 2358 log.go:172] (0xc000a05080) Reply frame received for 1\nI0515 00:34:01.154599 2358 log.go:172] (0xc000a05080) (0xc0004bcbe0) Create stream\nI0515 00:34:01.154611 2358 log.go:172] (0xc000a05080) (0xc0004bcbe0) Stream added, broadcasting: 3\nI0515 00:34:01.155456 2358 log.go:172] (0xc000a05080) Reply frame received for 3\nI0515 00:34:01.155490 2358 log.go:172] (0xc000a05080) (0xc000326960) Create stream\nI0515 00:34:01.155501 2358 log.go:172] (0xc000a05080) (0xc000326960) Stream added, broadcasting: 5\nI0515 00:34:01.156136 2358 log.go:172] (0xc000a05080) Reply frame received for 5\nI0515 00:34:01.245862 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.245883 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.245892 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.245910 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.245916 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.245923 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.249773 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.249819 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.249843 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.250151 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.250164 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.250178 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.250186 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.250198 2358 log.go:172] (0xc000326960) (5) Data frame handling\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.250216 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.250230 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.250241 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.250254 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.253659 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.253688 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.253719 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.254074 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.254096 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.254105 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.254124 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.254133 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.254145 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.257253 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.257281 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.257295 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.257708 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.257721 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.257731 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.257745 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.257758 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.257766 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.257773 2358 log.go:172] (0xc000326960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.257790 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.257800 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.262772 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.262785 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.262796 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.263250 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.263269 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.263278 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.263289 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.263295 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.263303 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.263310 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.263316 2358 log.go:172] (0xc000326960) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.263330 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.267065 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.267095 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.267123 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.267465 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.267495 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 
00:34:01.267507 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.267525 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.267539 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.267556 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.271041 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.271055 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.271068 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.271480 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.271527 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.271542 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.271554 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.271561 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.271568 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.275448 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.275465 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.275484 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.275778 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.275800 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.275817 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.275830 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.275840 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.275860 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.279214 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.279244 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.279272 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.279609 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.279635 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.279653 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.279675 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.279685 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.279696 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.283208 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.283227 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.283250 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.283701 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.283744 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.283767 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.283800 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.283823 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.283851 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.291022 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 
00:34:01.291077 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.291102 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.291122 2358 log.go:172] (0xc000a05080) Data frame received for 5\n+ I0515 00:34:01.291151 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.291189 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.291218 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.291236 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.291261 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.291282 2358 log.go:172] (0xc000326960) (5) Data frame sent\necho\n+ curlI0515 00:34:01.291299 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.291358 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.291376 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.291385 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.291394 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.291407 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.291421 2358 log.go:172] (0xc000326960) (5) Data frame handling\n -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.291441 2358 log.go:172] (0xc000326960) (5) Data frame sent\nI0515 00:34:01.294639 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.294662 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.294677 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.295015 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.295034 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.295059 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.295073 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.295085 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.295092 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.298352 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.298367 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.298375 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.298777 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.298805 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.298827 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.298858 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.298885 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.298913 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -qI0515 00:34:01.298932 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.298996 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.299017 2358 log.go:172] (0xc000326960) (5) Data frame sent\n -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.304723 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.304755 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.304781 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.305560 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.305578 2358 log.go:172] (0xc0004bcbe0) (3) Data frame 
handling\nI0515 00:34:01.305587 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.305625 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.305696 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.305719 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.308704 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.308732 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.308756 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.308899 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.308912 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.308920 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.308932 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.308938 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.308944 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.314722 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.314752 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.314777 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.315135 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.315164 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.315185 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.315205 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.315220 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.315249 2358 log.go:172] (0xc000326960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.118.102:80/\nI0515 00:34:01.321712 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.321741 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.321762 2358 log.go:172] (0xc0004bcbe0) (3) Data frame sent\nI0515 00:34:01.322248 2358 log.go:172] (0xc000a05080) Data frame received for 5\nI0515 00:34:01.322262 2358 log.go:172] (0xc000326960) (5) Data frame handling\nI0515 00:34:01.322288 2358 log.go:172] (0xc000a05080) Data frame received for 3\nI0515 00:34:01.322317 2358 log.go:172] (0xc0004bcbe0) (3) Data frame handling\nI0515 00:34:01.331746 2358 log.go:172] (0xc000a05080) Data frame received for 1\nI0515 00:34:01.331770 2358 log.go:172] (0xc000bd26e0) (1) Data frame handling\nI0515 00:34:01.331785 2358 log.go:172] (0xc000bd26e0) (1) Data frame sent\nI0515 00:34:01.331796 2358 log.go:172] (0xc000a05080) (0xc000bd26e0) Stream removed, broadcasting: 1\nI0515 00:34:01.331809 2358 log.go:172] (0xc000a05080) Go away received\nI0515 00:34:01.332039 2358 log.go:172] (0xc000a05080) (0xc000bd26e0) Stream removed, broadcasting: 1\nI0515 00:34:01.332055 2358 log.go:172] (0xc000a05080) (0xc0004bcbe0) Stream removed, broadcasting: 3\nI0515 00:34:01.332066 2358 log.go:172] (0xc000a05080) (0xc000326960) Stream removed, broadcasting: 5\n" May 15 00:34:01.336: INFO: stdout: 
"\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv\naffinity-clusterip-7gvhv" May 15 00:34:01.336: INFO: Received response from host: May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Received response from host: affinity-clusterip-7gvhv May 15 00:34:01.336: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-6929, will wait for the garbage collector to delete the pods May 15 00:34:01.442: INFO: Deleting ReplicationController affinity-clusterip took: 4.730745ms May 15 00:34:02.042: INFO: Terminating ReplicationController affinity-clusterip pods took: 600.299544ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:34:15.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6929" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.264 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":155,"skipped":2458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:34:15.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-f63ce672-35ab-4b25-9936-8cded841b55e STEP: Creating a pod to test consume configMaps May 15 00:34:15.746: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a" in namespace "projected-7145" to be "Succeeded or Failed" May 15 00:34:15.750: INFO: Pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.628941ms May 15 00:34:17.754: INFO: Pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007333253s May 15 00:34:19.835: INFO: Pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088441741s May 15 00:34:21.859: INFO: Pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11201238s STEP: Saw pod success May 15 00:34:21.859: INFO: Pod "pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a" satisfied condition "Succeeded or Failed" May 15 00:34:21.861: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a container projected-configmap-volume-test: STEP: delete the pod May 15 00:34:21.914: INFO: Waiting for pod pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a to disappear May 15 00:34:21.920: INFO: Pod pod-projected-configmaps-a91d7ba5-031b-4a61-bcb4-39f0f381a39a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:34:21.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7145" for this suite. 
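[Editor's note] For reference, a "consumable as non-root" projected-ConfigMap pod looks roughly like the sketch below: the ConfigMap is exposed through a projected volume and the container is forced to a non-root UID, proving the mounted keys are still readable without root. Names and image are illustrative (the framework randomizes its object names):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Println(pod.Name)
}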
• [SLOW TEST:6.332 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:34:21.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:34:28.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1353" for this suite. STEP: Destroying namespace "nsdeletetest-779" for this suite. May 15 00:34:28.250: INFO: Namespace nsdeletetest-779 was already deleted STEP: Destroying namespace "nsdeletetest-2709" for this suite. 
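[Editor's note] The invariant the namespaces test checks — delete a namespace and every service inside it goes with it — is plain cascading deletion and can be observed with client-go directly. A minimal sketch, assuming a reachable cluster, a kubeconfig at the default path, and an illustrative namespace name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Deleting the namespace cascades to everything in it, services included.
	// Deletion is asynchronous: the namespace lingers in Terminating until its
	// contents are gone, which is what the test's wait step covers.
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Once recreated, the namespace must contain no services.
	svcs, err := cs.CoreV1().Services("nsdeletetest").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining: %d\n", len(svcs.Items))
}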
• [SLOW TEST:6.327 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":157,"skipped":2508,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:34:28.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-02dc9c75-ac4e-4a37-a805-1643ec8ff2e1 STEP: Creating secret with name s-test-opt-upd-ce423f66-158d-409b-85d2-ea07f8e8c40f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-02dc9c75-ac4e-4a37-a805-1643ec8ff2e1 STEP: Updating secret s-test-opt-upd-ce423f66-158d-409b-85d2-ea07f8e8c40f STEP: Creating secret with name s-test-opt-create-efbb0d37-576b-4f8a-9197-eec7fe43a2c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:35:53.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6938" for this suite. 
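[Editor's note] The "optional updates" secrets test above exercises the kubelet's periodic re-sync of secret volumes: a mounted key disappears after its secret is deleted, appears once a late-created "optional" secret exists, and changes in place after an update — hence the long wait observed. The volume shape, roughly (names illustrative; Optional is what lets the pod start before s-test-opt-create exists):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vols := []corev1.Volume{
		{
			Name: "del-volume", // its secret is deleted while the pod runs
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-del", Optional: &optional},
			},
		},
		{
			Name: "upd-volume", // its secret data is updated in place
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-upd", Optional: &optional},
			},
		},
		{
			Name: "create-volume", // its secret does not exist yet at pod start
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-create", Optional: &optional},
			},
		},
	}
	fmt.Println(len(vols), "secret volumes")
}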
• [SLOW TEST:85.425 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:35:53.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d4135105-3c58-4e69-b19c-aac2bb89825a STEP: Creating a pod to test consume secrets May 15 00:35:53.802: INFO: Waiting up to 5m0s for pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b" in namespace "secrets-2036" to be "Succeeded or Failed" May 15 00:35:53.819: INFO: Pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.590005ms May 15 00:35:55.908: INFO: Pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106097118s May 15 00:35:57.911: INFO: Pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b": Phase="Running", Reason="", readiness=true. Elapsed: 4.109039605s May 15 00:35:59.915: INFO: Pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113146512s STEP: Saw pod success May 15 00:35:59.915: INFO: Pod "pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b" satisfied condition "Succeeded or Failed" May 15 00:35:59.918: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b container secret-volume-test: STEP: delete the pod May 15 00:36:00.227: INFO: Waiting for pod pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b to disappear May 15 00:36:00.294: INFO: Pod pod-secrets-27d8bfde-c90e-485d-bbd6-8bbb2104406b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:00.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2036" for this suite. 
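[Editor's note] The defaultMode variant checks the permission bits on the mounted secret files. In the Go API the mode is an octal int32 pointer on the volume source; 0400 below is a conventional value for this check, illustrative here:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // files in the volume become read-only for the owner
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
	fmt.Printf("defaultMode: %o\n", *vol.VolumeSource.Secret.DefaultMode)
}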
• [SLOW TEST:6.621 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2563,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:00.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9282 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9282 I0515 00:36:01.174517 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9282, replica count: 2 I0515 00:36:04.224891 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:36:07.225271 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:36:07.225: INFO: Creating new exec pod May 15 00:36:12.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9282 execpod9phqd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 15 00:36:16.436: INFO: stderr: "I0515 00:36:16.340576 2379 log.go:172] (0xc00003a420) (0xc0006f5680) Create stream\nI0515 00:36:16.340610 2379 log.go:172] (0xc00003a420) (0xc0006f5680) Stream added, broadcasting: 1\nI0515 00:36:16.342517 2379 log.go:172] (0xc00003a420) Reply frame received for 1\nI0515 00:36:16.342553 2379 log.go:172] (0xc00003a420) (0xc0006e4be0) Create stream\nI0515 00:36:16.342562 2379 log.go:172] (0xc00003a420) (0xc0006e4be0) Stream added, broadcasting: 3\nI0515 00:36:16.343432 2379 log.go:172] (0xc00003a420) Reply frame received for 3\nI0515 00:36:16.343462 2379 log.go:172] (0xc00003a420) (0xc0006dae60) Create stream\nI0515 00:36:16.343471 2379 log.go:172] (0xc00003a420) (0xc0006dae60) Stream added, broadcasting: 5\nI0515 00:36:16.344377 2379 log.go:172] (0xc00003a420) Reply frame received for 5\nI0515 00:36:16.428148 2379 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 00:36:16.428176 2379 log.go:172] (0xc0006dae60) (5) Data frame handling\nI0515 00:36:16.428198 2379 
log.go:172] (0xc0006dae60) (5) Data frame sent\nI0515 00:36:16.428209 2379 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 00:36:16.428216 2379 log.go:172] (0xc0006dae60) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0515 00:36:16.428282 2379 log.go:172] (0xc00003a420) Data frame received for 3\nI0515 00:36:16.428294 2379 log.go:172] (0xc0006e4be0) (3) Data frame handling\nI0515 00:36:16.429922 2379 log.go:172] (0xc00003a420) Data frame received for 1\nI0515 00:36:16.429941 2379 log.go:172] (0xc0006f5680) (1) Data frame handling\nI0515 00:36:16.429956 2379 log.go:172] (0xc0006f5680) (1) Data frame sent\nI0515 00:36:16.430020 2379 log.go:172] (0xc00003a420) (0xc0006f5680) Stream removed, broadcasting: 1\nI0515 00:36:16.430125 2379 log.go:172] (0xc00003a420) Go away received\nI0515 00:36:16.430493 2379 log.go:172] (0xc00003a420) (0xc0006f5680) Stream removed, broadcasting: 1\nI0515 00:36:16.430520 2379 log.go:172] (0xc00003a420) (0xc0006e4be0) Stream removed, broadcasting: 3\nI0515 00:36:16.430534 2379 log.go:172] (0xc00003a420) (0xc0006dae60) Stream removed, broadcasting: 5\n" May 15 00:36:16.436: INFO: stdout: "" May 15 00:36:16.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9282 execpod9phqd -- /bin/sh -x -c nc -zv -t -w 2 10.107.240.128 80' May 15 00:36:16.652: INFO: stderr: "I0515 00:36:16.579988 2409 log.go:172] (0xc00099b290) (0xc000307680) Create stream\nI0515 00:36:16.580069 2409 log.go:172] (0xc00099b290) (0xc000307680) Stream added, broadcasting: 1\nI0515 00:36:16.582505 2409 log.go:172] (0xc00099b290) Reply frame received for 1\nI0515 00:36:16.582550 2409 log.go:172] (0xc00099b290) (0xc000a98280) Create stream\nI0515 00:36:16.582570 2409 log.go:172] (0xc00099b290) (0xc000a98280) Stream added, broadcasting: 3\nI0515 00:36:16.583548 2409 log.go:172] (0xc00099b290) Reply frame received for 3\nI0515 00:36:16.583596 2409 log.go:172] (0xc00099b290) (0xc000307900) Create stream\nI0515 00:36:16.583617 2409 log.go:172] (0xc00099b290) (0xc000307900) Stream added, broadcasting: 5\nI0515 00:36:16.584539 2409 log.go:172] (0xc00099b290) Reply frame received for 5\nI0515 00:36:16.646118 2409 log.go:172] (0xc00099b290) Data frame received for 3\nI0515 00:36:16.646165 2409 log.go:172] (0xc000a98280) (3) Data frame handling\nI0515 00:36:16.646196 2409 log.go:172] (0xc00099b290) Data frame received for 5\nI0515 00:36:16.646222 2409 log.go:172] (0xc000307900) (5) Data frame handling\nI0515 00:36:16.646233 2409 log.go:172] (0xc000307900) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.240.128 80\nConnection to 10.107.240.128 80 port [tcp/http] succeeded!\nI0515 00:36:16.646305 2409 log.go:172] (0xc00099b290) Data frame received for 5\nI0515 00:36:16.646334 2409 log.go:172] (0xc000307900) (5) Data frame handling\nI0515 00:36:16.647947 2409 log.go:172] (0xc00099b290) Data frame received for 1\nI0515 00:36:16.648008 2409 log.go:172] (0xc000307680) (1) Data frame handling\nI0515 00:36:16.648031 2409 log.go:172] (0xc000307680) (1) Data frame sent\nI0515 00:36:16.648059 2409 log.go:172] (0xc00099b290) (0xc000307680) Stream removed, broadcasting: 1\nI0515 00:36:16.648109 2409 log.go:172] (0xc00099b290) Go away received\nI0515 00:36:16.648361 2409 log.go:172] (0xc00099b290) (0xc000307680) Stream removed, broadcasting: 1\nI0515 00:36:16.648381 2409 log.go:172] (0xc00099b290) (0xc000a98280) Stream removed, broadcasting: 
3\nI0515 00:36:16.648389 2409 log.go:172] (0xc00099b290) (0xc000307900) Stream removed, broadcasting: 5\n" May 15 00:36:16.652: INFO: stdout: "" May 15 00:36:16.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9282 execpod9phqd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31690' May 15 00:36:16.852: INFO: stderr: "I0515 00:36:16.780514 2429 log.go:172] (0xc0009b9810) (0xc000b9a320) Create stream\nI0515 00:36:16.780566 2429 log.go:172] (0xc0009b9810) (0xc000b9a320) Stream added, broadcasting: 1\nI0515 00:36:16.786062 2429 log.go:172] (0xc0009b9810) Reply frame received for 1\nI0515 00:36:16.786119 2429 log.go:172] (0xc0009b9810) (0xc000658320) Create stream\nI0515 00:36:16.786141 2429 log.go:172] (0xc0009b9810) (0xc000658320) Stream added, broadcasting: 3\nI0515 00:36:16.787050 2429 log.go:172] (0xc0009b9810) Reply frame received for 3\nI0515 00:36:16.787077 2429 log.go:172] (0xc0009b9810) (0xc00052ee60) Create stream\nI0515 00:36:16.787086 2429 log.go:172] (0xc0009b9810) (0xc00052ee60) Stream added, broadcasting: 5\nI0515 00:36:16.787911 2429 log.go:172] (0xc0009b9810) Reply frame received for 5\nI0515 00:36:16.844433 2429 log.go:172] (0xc0009b9810) Data frame received for 3\nI0515 00:36:16.844478 2429 log.go:172] (0xc000658320) (3) Data frame handling\nI0515 00:36:16.844511 2429 log.go:172] (0xc0009b9810) Data frame received for 5\nI0515 00:36:16.844524 2429 log.go:172] (0xc00052ee60) (5) Data frame handling\nI0515 00:36:16.844540 2429 log.go:172] (0xc00052ee60) (5) Data frame sent\nI0515 00:36:16.844554 2429 log.go:172] (0xc0009b9810) Data frame received for 5\nI0515 00:36:16.844577 2429 log.go:172] (0xc00052ee60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31690\nConnection to 172.17.0.13 31690 port [tcp/31690] succeeded!\nI0515 00:36:16.846299 2429 log.go:172] (0xc0009b9810) Data frame received for 1\nI0515 00:36:16.846369 2429 log.go:172] (0xc000b9a320) (1) Data frame handling\nI0515 00:36:16.846424 2429 log.go:172] (0xc000b9a320) (1) Data frame sent\nI0515 00:36:16.846480 2429 log.go:172] (0xc0009b9810) (0xc000b9a320) Stream removed, broadcasting: 1\nI0515 00:36:16.846504 2429 log.go:172] (0xc0009b9810) Go away received\nI0515 00:36:16.846900 2429 log.go:172] (0xc0009b9810) (0xc000b9a320) Stream removed, broadcasting: 1\nI0515 00:36:16.846929 2429 log.go:172] (0xc0009b9810) (0xc000658320) Stream removed, broadcasting: 3\nI0515 00:36:16.846943 2429 log.go:172] (0xc0009b9810) (0xc00052ee60) Stream removed, broadcasting: 5\n" May 15 00:36:16.852: INFO: stdout: "" May 15 00:36:16.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9282 execpod9phqd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31690' May 15 00:36:17.074: INFO: stderr: "I0515 00:36:16.995815 2451 log.go:172] (0xc000988fd0) (0xc000ad23c0) Create stream\nI0515 00:36:16.995877 2451 log.go:172] (0xc000988fd0) (0xc000ad23c0) Stream added, broadcasting: 1\nI0515 00:36:17.000201 2451 log.go:172] (0xc000988fd0) Reply frame received for 1\nI0515 00:36:17.000241 2451 log.go:172] (0xc000988fd0) (0xc000846640) Create stream\nI0515 00:36:17.000253 2451 log.go:172] (0xc000988fd0) (0xc000846640) Stream added, broadcasting: 3\nI0515 00:36:17.001232 2451 log.go:172] (0xc000988fd0) Reply frame received for 3\nI0515 00:36:17.001257 2451 log.go:172] (0xc000988fd0) (0xc00082e5a0) Create stream\nI0515 00:36:17.001265 2451 log.go:172] (0xc000988fd0) (0xc00082e5a0) 
Stream added, broadcasting: 5\nI0515 00:36:17.002147 2451 log.go:172] (0xc000988fd0) Reply frame received for 5\nI0515 00:36:17.067786 2451 log.go:172] (0xc000988fd0) Data frame received for 3\nI0515 00:36:17.067814 2451 log.go:172] (0xc000846640) (3) Data frame handling\nI0515 00:36:17.067839 2451 log.go:172] (0xc000988fd0) Data frame received for 5\nI0515 00:36:17.067858 2451 log.go:172] (0xc00082e5a0) (5) Data frame handling\nI0515 00:36:17.067876 2451 log.go:172] (0xc00082e5a0) (5) Data frame sent\nI0515 00:36:17.067890 2451 log.go:172] (0xc000988fd0) Data frame received for 5\nI0515 00:36:17.067903 2451 log.go:172] (0xc00082e5a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31690\nConnection to 172.17.0.12 31690 port [tcp/31690] succeeded!\nI0515 00:36:17.069273 2451 log.go:172] (0xc000988fd0) Data frame received for 1\nI0515 00:36:17.069296 2451 log.go:172] (0xc000ad23c0) (1) Data frame handling\nI0515 00:36:17.069311 2451 log.go:172] (0xc000ad23c0) (1) Data frame sent\nI0515 00:36:17.069334 2451 log.go:172] (0xc000988fd0) (0xc000ad23c0) Stream removed, broadcasting: 1\nI0515 00:36:17.069374 2451 log.go:172] (0xc000988fd0) Go away received\nI0515 00:36:17.069717 2451 log.go:172] (0xc000988fd0) (0xc000ad23c0) Stream removed, broadcasting: 1\nI0515 00:36:17.069734 2451 log.go:172] (0xc000988fd0) (0xc000846640) Stream removed, broadcasting: 3\nI0515 00:36:17.069746 2451 log.go:172] (0xc000988fd0) (0xc00082e5a0) Stream removed, broadcasting: 5\n" May 15 00:36:17.074: INFO: stdout: "" May 15 00:36:17.074: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:17.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9282" for this suite. 
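[Editor's note] The type flip tested here is a single spec mutation: an ExternalName service is just a DNS alias (no selector, no cluster IP), and switching it to NodePort gives it a selector-backed cluster IP plus a port opened on every node — which is why the log then probes both node addresses (172.17.0.13 and 172.17.0.12) on the allocated port 31690. A sketch of the before/after specs, with illustrative names and ports:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Before: pure DNS alias, nothing is proxied.
	before := corev1.ServiceSpec{
		Type:         corev1.ServiceTypeExternalName,
		ExternalName: "foo.example.com",
	}

	// After: selector-backed service reachable on a node port.
	// ExternalName must be cleared when the type changes.
	after := corev1.ServiceSpec{
		Type:     corev1.ServiceTypeNodePort,
		Selector: map[string]string{"name": "externalname-service"},
		Ports: []corev1.ServicePort{{
			Port: 80,
			// NodePort is normally left 0 and allocated by the apiserver
			// (31690 in the run above).
		}},
	}
	fmt.Println(before.Type, "->", after.Type)
}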
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:16.883 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":160,"skipped":2573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:17.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 00:36:17.235: INFO: Waiting up to 5m0s for pod "downward-api-c2d871cc-f277-451d-b628-7a6360db4465" in namespace "downward-api-6036" to be "Succeeded or Failed" May 15 00:36:17.303: INFO: Pod "downward-api-c2d871cc-f277-451d-b628-7a6360db4465": Phase="Pending", Reason="", readiness=false. Elapsed: 67.500065ms May 15 00:36:19.307: INFO: Pod "downward-api-c2d871cc-f277-451d-b628-7a6360db4465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071601764s May 15 00:36:21.312: INFO: Pod "downward-api-c2d871cc-f277-451d-b628-7a6360db4465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076422205s STEP: Saw pod success May 15 00:36:21.312: INFO: Pod "downward-api-c2d871cc-f277-451d-b628-7a6360db4465" satisfied condition "Succeeded or Failed" May 15 00:36:21.316: INFO: Trying to get logs from node latest-worker2 pod downward-api-c2d871cc-f277-451d-b628-7a6360db4465 container dapi-container: STEP: delete the pod May 15 00:36:21.349: INFO: Waiting for pod downward-api-c2d871cc-f277-451d-b628-7a6360db4465 to disappear May 15 00:36:21.356: INFO: Pod downward-api-c2d871cc-f277-451d-b628-7a6360db4465 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:21.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6036" for this suite. 
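[Editor's note] The Downward API test pivots on fieldRef-sourced environment variables: the kubelet resolves pod metadata and status fields into the container's env at start. A sketch of the env block that yields pod name, namespace, and IP (variable names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
	}
	for _, e := range env {
		fmt.Println(e.Name, "<-", e.ValueFrom.FieldRef.FieldPath)
	}
}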
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2599,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:21.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-70a77fc1-faaf-46cf-b587-cc17df4eb5a7 STEP: Creating a pod to test consume configMaps May 15 00:36:21.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa" in namespace "configmap-4361" to be "Succeeded or Failed" May 15 00:36:21.506: INFO: Pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.311037ms May 15 00:36:23.603: INFO: Pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134460598s May 15 00:36:25.658: INFO: Pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.189922147s May 15 00:36:27.662: INFO: Pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194010791s STEP: Saw pod success May 15 00:36:27.662: INFO: Pod "pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa" satisfied condition "Succeeded or Failed" May 15 00:36:27.665: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa container configmap-volume-test: STEP: delete the pod May 15 00:36:27.715: INFO: Waiting for pod pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa to disappear May 15 00:36:27.725: INFO: Pod pod-configmaps-5a666bc7-7ca5-47d3-8b2c-e1c404b0a5aa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:27.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4361" for this suite. 
• [SLOW TEST:6.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2608,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:27.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:36:28.115: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"41594a44-6631-485d-b255-bd56276d824e", Controller:(*bool)(0xc003e7d1e2), BlockOwnerDeletion:(*bool)(0xc003e7d1e3)}} May 15 00:36:28.127: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"84a4da1f-28e4-4b15-a6bb-5c2ebc35be5d", Controller:(*bool)(0xc0026550a2), BlockOwnerDeletion:(*bool)(0xc0026550a3)}} May 15 00:36:28.197: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"22377ce4-9504-4450-a922-ca90800df922", Controller:(*bool)(0xc003e7d3ca), BlockOwnerDeletion:(*bool)(0xc003e7d3cb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:33.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9334" for this suite. 
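The garbage-collector test above wires pod1, pod2 and pod3 into an ownership circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReferences dumps in the log) and verifies deletion still completes. A sketch of how such OwnerReferences are built; the UIDs here are placeholders, since real UIDs only exist after the API server creates each pod:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	yes := true
	owner := func(name string, uid types.UID) []metav1.OwnerReference {
		return []metav1.OwnerReference{{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               name,
			UID:                uid, // in the real test, taken from the created pod
			Controller:         &yes,
			BlockOwnerDeletion: &yes,
		}}
	}
	// pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a dependency circle the
	// garbage collector must break rather than deadlock on.
	refs := map[string][]metav1.OwnerReference{
		"pod1": owner("pod3", "uid-3"),
		"pod2": owner("pod1", "uid-1"),
		"pod3": owner("pod2", "uid-2"),
	}
	for pod, r := range refs {
		fmt.Printf("%s.ObjectMeta.OwnerReferences=%+v\n", pod, r)
	}
}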
• [SLOW TEST:5.540 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":163,"skipped":2609,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:33.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 15 00:36:33.363: INFO: Waiting up to 5m0s for pod "var-expansion-762bac70-2521-479c-86bc-e78d9048985d" in namespace "var-expansion-718" to be "Succeeded or Failed" May 15 00:36:33.379: INFO: Pod "var-expansion-762bac70-2521-479c-86bc-e78d9048985d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.319711ms May 15 00:36:35.383: INFO: Pod "var-expansion-762bac70-2521-479c-86bc-e78d9048985d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019519824s May 15 00:36:37.387: INFO: Pod "var-expansion-762bac70-2521-479c-86bc-e78d9048985d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02388214s STEP: Saw pod success May 15 00:36:37.387: INFO: Pod "var-expansion-762bac70-2521-479c-86bc-e78d9048985d" satisfied condition "Succeeded or Failed" May 15 00:36:37.391: INFO: Trying to get logs from node latest-worker pod var-expansion-762bac70-2521-479c-86bc-e78d9048985d container dapi-container: STEP: delete the pod May 15 00:36:37.424: INFO: Waiting for pod var-expansion-762bac70-2521-479c-86bc-e78d9048985d to disappear May 15 00:36:37.458: INFO: Pod var-expansion-762bac70-2521-479c-86bc-e78d9048985d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:37.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-718" for this suite. 
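The Variable Expansion test above checks that env vars can be substituted into a volume subpath. A minimal sketch of the container wiring that exercises this: the mount uses SubPathExpr with a $(POD_NAME) reference, which the kubelet expands from an env var declared on the same container. Image and names are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox", // assumption: the conformance test uses its own test image
		Env: []corev1.EnvVar{{
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:        "workdir",
			MountPath:   "/logs",
			SubPathExpr: "$(POD_NAME)", // only env vars declared on this container are expanded
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}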
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":164,"skipped":2616,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:37.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:36:37.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c" in namespace "projected-6808" to be "Succeeded or Failed" May 15 00:36:37.591: INFO: Pod "downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 61.397306ms May 15 00:36:39.717: INFO: Pod "downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186841312s May 15 00:36:41.735: INFO: Pod "downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205667293s STEP: Saw pod success May 15 00:36:41.735: INFO: Pod "downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c" satisfied condition "Succeeded or Failed" May 15 00:36:41.770: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c container client-container: STEP: delete the pod May 15 00:36:41.803: INFO: Waiting for pod downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c to disappear May 15 00:36:41.810: INFO: Pod downwardapi-volume-bc200414-fd67-4fa4-8596-f7ab3079f28c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:36:41.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6808" for this suite. 
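The projected downwardAPI test above asserts that DefaultMode is applied to files in the projected volume. A hedged sketch of such a volume, with mode 0400 and the file path chosen for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // DefaultMode under test: every projected file gets these permissions
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}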
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:36:41.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5742 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5742 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5742 May 15 00:36:42.218: INFO: Found 0 stateful pods, waiting for 1 May 15 00:36:52.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 15 00:36:52.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:36:52.506: INFO: stderr: "I0515 00:36:52.354948 2472 log.go:172] (0xc000a360b0) (0xc0005a0fa0) Create stream\nI0515 00:36:52.354991 2472 log.go:172] (0xc000a360b0) (0xc0005a0fa0) Stream added, broadcasting: 1\nI0515 00:36:52.357337 2472 log.go:172] (0xc000a360b0) Reply frame received for 1\nI0515 00:36:52.357375 2472 log.go:172] (0xc000a360b0) (0xc000422dc0) Create stream\nI0515 00:36:52.357390 2472 log.go:172] (0xc000a360b0) (0xc000422dc0) Stream added, broadcasting: 3\nI0515 00:36:52.358004 2472 log.go:172] (0xc000a360b0) Reply frame received for 3\nI0515 00:36:52.358037 2472 log.go:172] (0xc000a360b0) (0xc0005385a0) Create stream\nI0515 00:36:52.358053 2472 log.go:172] (0xc000a360b0) (0xc0005385a0) Stream added, broadcasting: 5\nI0515 00:36:52.358734 2472 log.go:172] (0xc000a360b0) Reply frame received for 5\nI0515 00:36:52.435802 2472 log.go:172] (0xc000a360b0) Data frame received for 5\nI0515 00:36:52.435829 2472 log.go:172] (0xc0005385a0) (5) Data frame handling\nI0515 00:36:52.435850 2472 log.go:172] (0xc0005385a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:36:52.501609 2472 log.go:172] (0xc000a360b0) Data frame received for 5\nI0515 00:36:52.501631 2472 log.go:172] (0xc0005385a0) (5) Data frame handling\nI0515 00:36:52.501656 2472 log.go:172] (0xc000a360b0) Data frame received for 3\nI0515 00:36:52.501691 
2472 log.go:172] (0xc000422dc0) (3) Data frame handling\nI0515 00:36:52.501704 2472 log.go:172] (0xc000422dc0) (3) Data frame sent\nI0515 00:36:52.501717 2472 log.go:172] (0xc000a360b0) Data frame received for 3\nI0515 00:36:52.501729 2472 log.go:172] (0xc000422dc0) (3) Data frame handling\nI0515 00:36:52.502942 2472 log.go:172] (0xc000a360b0) Data frame received for 1\nI0515 00:36:52.502953 2472 log.go:172] (0xc0005a0fa0) (1) Data frame handling\nI0515 00:36:52.502958 2472 log.go:172] (0xc0005a0fa0) (1) Data frame sent\nI0515 00:36:52.502965 2472 log.go:172] (0xc000a360b0) (0xc0005a0fa0) Stream removed, broadcasting: 1\nI0515 00:36:52.503014 2472 log.go:172] (0xc000a360b0) Go away received\nI0515 00:36:52.503131 2472 log.go:172] (0xc000a360b0) (0xc0005a0fa0) Stream removed, broadcasting: 1\nI0515 00:36:52.503140 2472 log.go:172] (0xc000a360b0) (0xc000422dc0) Stream removed, broadcasting: 3\nI0515 00:36:52.503145 2472 log.go:172] (0xc000a360b0) (0xc0005385a0) Stream removed, broadcasting: 5\n" May 15 00:36:52.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:36:52.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:36:52.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 00:37:02.513: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 00:37:02.513: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:37:02.544: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999617s May 15 00:37:03.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97585845s May 15 00:37:04.551: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972456567s May 15 00:37:05.556: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968570142s May 15 00:37:06.560: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963993195s May 15 00:37:07.564: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.959624186s May 15 00:37:08.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.955296104s May 15 00:37:09.574: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.950495123s May 15 00:37:10.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.945737953s May 15 00:37:11.584: INFO: Verifying statefulset ss doesn't scale past 1 for another 940.47279ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5742 May 15 00:37:12.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:37:12.792: INFO: stderr: "I0515 00:37:12.712096 2494 log.go:172] (0xc000b6cfd0) (0xc00073ae60) Create stream\nI0515 00:37:12.712160 2494 log.go:172] (0xc000b6cfd0) (0xc00073ae60) Stream added, broadcasting: 1\nI0515 00:37:12.716166 2494 log.go:172] (0xc000b6cfd0) Reply frame received for 1\nI0515 00:37:12.716196 2494 log.go:172] (0xc000b6cfd0) (0xc0007114a0) Create stream\nI0515 00:37:12.716204 2494 log.go:172] (0xc000b6cfd0) (0xc0007114a0) Stream added, broadcasting: 3\nI0515 00:37:12.717106 2494 log.go:172] (0xc000b6cfd0) Reply frame received for 3\nI0515 00:37:12.717310 2494 log.go:172] (0xc000b6cfd0) (0xc000704f00) 
Create stream\nI0515 00:37:12.717338 2494 log.go:172] (0xc000b6cfd0) (0xc000704f00) Stream added, broadcasting: 5\nI0515 00:37:12.718164 2494 log.go:172] (0xc000b6cfd0) Reply frame received for 5\nI0515 00:37:12.785765 2494 log.go:172] (0xc000b6cfd0) Data frame received for 3\nI0515 00:37:12.785795 2494 log.go:172] (0xc0007114a0) (3) Data frame handling\nI0515 00:37:12.785803 2494 log.go:172] (0xc0007114a0) (3) Data frame sent\nI0515 00:37:12.785808 2494 log.go:172] (0xc000b6cfd0) Data frame received for 3\nI0515 00:37:12.785812 2494 log.go:172] (0xc0007114a0) (3) Data frame handling\nI0515 00:37:12.785833 2494 log.go:172] (0xc000b6cfd0) Data frame received for 5\nI0515 00:37:12.785838 2494 log.go:172] (0xc000704f00) (5) Data frame handling\nI0515 00:37:12.785843 2494 log.go:172] (0xc000704f00) (5) Data frame sent\nI0515 00:37:12.785848 2494 log.go:172] (0xc000b6cfd0) Data frame received for 5\nI0515 00:37:12.785852 2494 log.go:172] (0xc000704f00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 00:37:12.787351 2494 log.go:172] (0xc000b6cfd0) Data frame received for 1\nI0515 00:37:12.787365 2494 log.go:172] (0xc00073ae60) (1) Data frame handling\nI0515 00:37:12.787376 2494 log.go:172] (0xc00073ae60) (1) Data frame sent\nI0515 00:37:12.787393 2494 log.go:172] (0xc000b6cfd0) (0xc00073ae60) Stream removed, broadcasting: 1\nI0515 00:37:12.787527 2494 log.go:172] (0xc000b6cfd0) Go away received\nI0515 00:37:12.787725 2494 log.go:172] (0xc000b6cfd0) (0xc00073ae60) Stream removed, broadcasting: 1\nI0515 00:37:12.787745 2494 log.go:172] (0xc000b6cfd0) (0xc0007114a0) Stream removed, broadcasting: 3\nI0515 00:37:12.787753 2494 log.go:172] (0xc000b6cfd0) (0xc000704f00) Stream removed, broadcasting: 5\n" May 15 00:37:12.792: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:37:12.792: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:37:12.796: INFO: Found 1 stateful pods, waiting for 3 May 15 00:37:22.801: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 00:37:22.801: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 00:37:22.801: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 15 00:37:22.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:37:23.050: INFO: stderr: "I0515 00:37:22.943365 2515 log.go:172] (0xc000a9f970) (0xc000b720a0) Create stream\nI0515 00:37:22.943421 2515 log.go:172] (0xc000a9f970) (0xc000b720a0) Stream added, broadcasting: 1\nI0515 00:37:22.947895 2515 log.go:172] (0xc000a9f970) Reply frame received for 1\nI0515 00:37:22.947954 2515 log.go:172] (0xc000a9f970) (0xc0006e2f00) Create stream\nI0515 00:37:22.947967 2515 log.go:172] (0xc000a9f970) (0xc0006e2f00) Stream added, broadcasting: 3\nI0515 00:37:22.948853 2515 log.go:172] (0xc000a9f970) Reply frame received for 3\nI0515 00:37:22.948894 2515 log.go:172] (0xc000a9f970) (0xc0004d2280) Create stream\nI0515 00:37:22.948909 2515 log.go:172] (0xc000a9f970) (0xc0004d2280) Stream added, broadcasting: 5\nI0515 00:37:22.950242 2515 
log.go:172] (0xc000a9f970) Reply frame received for 5\nI0515 00:37:23.042782 2515 log.go:172] (0xc000a9f970) Data frame received for 5\nI0515 00:37:23.042829 2515 log.go:172] (0xc0004d2280) (5) Data frame handling\nI0515 00:37:23.042847 2515 log.go:172] (0xc0004d2280) (5) Data frame sent\nI0515 00:37:23.042863 2515 log.go:172] (0xc000a9f970) Data frame received for 5\nI0515 00:37:23.042873 2515 log.go:172] (0xc0004d2280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:37:23.042917 2515 log.go:172] (0xc000a9f970) Data frame received for 3\nI0515 00:37:23.042945 2515 log.go:172] (0xc0006e2f00) (3) Data frame handling\nI0515 00:37:23.042969 2515 log.go:172] (0xc0006e2f00) (3) Data frame sent\nI0515 00:37:23.042981 2515 log.go:172] (0xc000a9f970) Data frame received for 3\nI0515 00:37:23.042990 2515 log.go:172] (0xc0006e2f00) (3) Data frame handling\nI0515 00:37:23.044279 2515 log.go:172] (0xc000a9f970) Data frame received for 1\nI0515 00:37:23.044300 2515 log.go:172] (0xc000b720a0) (1) Data frame handling\nI0515 00:37:23.044316 2515 log.go:172] (0xc000b720a0) (1) Data frame sent\nI0515 00:37:23.044334 2515 log.go:172] (0xc000a9f970) (0xc000b720a0) Stream removed, broadcasting: 1\nI0515 00:37:23.044405 2515 log.go:172] (0xc000a9f970) Go away received\nI0515 00:37:23.044616 2515 log.go:172] (0xc000a9f970) (0xc000b720a0) Stream removed, broadcasting: 1\nI0515 00:37:23.044634 2515 log.go:172] (0xc000a9f970) (0xc0006e2f00) Stream removed, broadcasting: 3\nI0515 00:37:23.044643 2515 log.go:172] (0xc000a9f970) (0xc0004d2280) Stream removed, broadcasting: 5\n" May 15 00:37:23.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:37:23.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:37:23.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:37:23.292: INFO: stderr: "I0515 00:37:23.181379 2535 log.go:172] (0xc00080b080) (0xc000a02820) Create stream\nI0515 00:37:23.181441 2535 log.go:172] (0xc00080b080) (0xc000a02820) Stream added, broadcasting: 1\nI0515 00:37:23.184883 2535 log.go:172] (0xc00080b080) Reply frame received for 1\nI0515 00:37:23.184935 2535 log.go:172] (0xc00080b080) (0xc000534d20) Create stream\nI0515 00:37:23.184952 2535 log.go:172] (0xc00080b080) (0xc000534d20) Stream added, broadcasting: 3\nI0515 00:37:23.186339 2535 log.go:172] (0xc00080b080) Reply frame received for 3\nI0515 00:37:23.186382 2535 log.go:172] (0xc00080b080) (0xc0002921e0) Create stream\nI0515 00:37:23.186397 2535 log.go:172] (0xc00080b080) (0xc0002921e0) Stream added, broadcasting: 5\nI0515 00:37:23.187561 2535 log.go:172] (0xc00080b080) Reply frame received for 5\nI0515 00:37:23.254124 2535 log.go:172] (0xc00080b080) Data frame received for 5\nI0515 00:37:23.254165 2535 log.go:172] (0xc0002921e0) (5) Data frame handling\nI0515 00:37:23.254198 2535 log.go:172] (0xc0002921e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:37:23.282452 2535 log.go:172] (0xc00080b080) Data frame received for 3\nI0515 00:37:23.282492 2535 log.go:172] (0xc000534d20) (3) Data frame handling\nI0515 00:37:23.282521 2535 log.go:172] (0xc000534d20) (3) Data frame sent\nI0515 00:37:23.282543 2535 log.go:172] (0xc00080b080) Data frame received for 
3\nI0515 00:37:23.282568 2535 log.go:172] (0xc000534d20) (3) Data frame handling\nI0515 00:37:23.282592 2535 log.go:172] (0xc00080b080) Data frame received for 5\nI0515 00:37:23.282613 2535 log.go:172] (0xc0002921e0) (5) Data frame handling\nI0515 00:37:23.284344 2535 log.go:172] (0xc00080b080) Data frame received for 1\nI0515 00:37:23.284380 2535 log.go:172] (0xc000a02820) (1) Data frame handling\nI0515 00:37:23.284416 2535 log.go:172] (0xc000a02820) (1) Data frame sent\nI0515 00:37:23.284451 2535 log.go:172] (0xc00080b080) (0xc000a02820) Stream removed, broadcasting: 1\nI0515 00:37:23.284625 2535 log.go:172] (0xc00080b080) Go away received\nI0515 00:37:23.285090 2535 log.go:172] (0xc00080b080) (0xc000a02820) Stream removed, broadcasting: 1\nI0515 00:37:23.285326 2535 log.go:172] (0xc00080b080) (0xc000534d20) Stream removed, broadcasting: 3\nI0515 00:37:23.285347 2535 log.go:172] (0xc00080b080) (0xc0002921e0) Stream removed, broadcasting: 5\n" May 15 00:37:23.292: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:37:23.292: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:37:23.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 00:37:23.541: INFO: stderr: "I0515 00:37:23.423383 2555 log.go:172] (0xc000b3b8c0) (0xc000bbe140) Create stream\nI0515 00:37:23.423446 2555 log.go:172] (0xc000b3b8c0) (0xc000bbe140) Stream added, broadcasting: 1\nI0515 00:37:23.428328 2555 log.go:172] (0xc000b3b8c0) Reply frame received for 1\nI0515 00:37:23.428376 2555 log.go:172] (0xc000b3b8c0) (0xc000868dc0) Create stream\nI0515 00:37:23.428387 2555 log.go:172] (0xc000b3b8c0) (0xc000868dc0) Stream added, broadcasting: 3\nI0515 00:37:23.429498 2555 log.go:172] (0xc000b3b8c0) Reply frame received for 3\nI0515 00:37:23.429540 2555 log.go:172] (0xc000b3b8c0) (0xc00085e500) Create stream\nI0515 00:37:23.429559 2555 log.go:172] (0xc000b3b8c0) (0xc00085e500) Stream added, broadcasting: 5\nI0515 00:37:23.430609 2555 log.go:172] (0xc000b3b8c0) Reply frame received for 5\nI0515 00:37:23.497309 2555 log.go:172] (0xc000b3b8c0) Data frame received for 5\nI0515 00:37:23.497347 2555 log.go:172] (0xc00085e500) (5) Data frame handling\nI0515 00:37:23.497362 2555 log.go:172] (0xc00085e500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 00:37:23.534366 2555 log.go:172] (0xc000b3b8c0) Data frame received for 5\nI0515 00:37:23.534420 2555 log.go:172] (0xc00085e500) (5) Data frame handling\nI0515 00:37:23.534456 2555 log.go:172] (0xc000b3b8c0) Data frame received for 3\nI0515 00:37:23.534480 2555 log.go:172] (0xc000868dc0) (3) Data frame handling\nI0515 00:37:23.534514 2555 log.go:172] (0xc000868dc0) (3) Data frame sent\nI0515 00:37:23.534537 2555 log.go:172] (0xc000b3b8c0) Data frame received for 3\nI0515 00:37:23.534563 2555 log.go:172] (0xc000868dc0) (3) Data frame handling\nI0515 00:37:23.536125 2555 log.go:172] (0xc000b3b8c0) Data frame received for 1\nI0515 00:37:23.536181 2555 log.go:172] (0xc000bbe140) (1) Data frame handling\nI0515 00:37:23.536216 2555 log.go:172] (0xc000bbe140) (1) Data frame sent\nI0515 00:37:23.536239 2555 log.go:172] (0xc000b3b8c0) (0xc000bbe140) Stream removed, broadcasting: 1\nI0515 00:37:23.536282 2555 log.go:172] (0xc000b3b8c0) Go away received\nI0515 
00:37:23.536495 2555 log.go:172] (0xc000b3b8c0) (0xc000bbe140) Stream removed, broadcasting: 1\nI0515 00:37:23.536513 2555 log.go:172] (0xc000b3b8c0) (0xc000868dc0) Stream removed, broadcasting: 3\nI0515 00:37:23.536522 2555 log.go:172] (0xc000b3b8c0) (0xc00085e500) Stream removed, broadcasting: 5\n" May 15 00:37:23.541: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 00:37:23.541: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 00:37:23.541: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:37:23.545: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 15 00:37:33.554: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 00:37:33.554: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 00:37:33.554: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 00:37:33.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999597s May 15 00:37:34.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992457455s May 15 00:37:35.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981433977s May 15 00:37:36.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976599684s May 15 00:37:37.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.957266443s May 15 00:37:38.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952720331s May 15 00:37:39.617: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.948584495s May 15 00:37:40.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943821873s May 15 00:37:41.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.939879646s May 15 00:37:42.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 934.7164ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5742 May 15 00:37:43.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:37:43.865: INFO: stderr: "I0515 00:37:43.771624 2576 log.go:172] (0xc000c07970) (0xc000725400) Create stream\nI0515 00:37:43.771685 2576 log.go:172] (0xc000c07970) (0xc000725400) Stream added, broadcasting: 1\nI0515 00:37:43.775919 2576 log.go:172] (0xc000c07970) Reply frame received for 1\nI0515 00:37:43.775958 2576 log.go:172] (0xc000c07970) (0xc00024e000) Create stream\nI0515 00:37:43.775978 2576 log.go:172] (0xc000c07970) (0xc00024e000) Stream added, broadcasting: 3\nI0515 00:37:43.776964 2576 log.go:172] (0xc000c07970) Reply frame received for 3\nI0515 00:37:43.777010 2576 log.go:172] (0xc000c07970) (0xc0006a4960) Create stream\nI0515 00:37:43.777024 2576 log.go:172] (0xc000c07970) (0xc0006a4960) Stream added, broadcasting: 5\nI0515 00:37:43.778081 2576 log.go:172] (0xc000c07970) Reply frame received for 5\nI0515 00:37:43.859084 2576 log.go:172] (0xc000c07970) Data frame received for 5\nI0515 00:37:43.859129 2576 log.go:172] (0xc0006a4960) (5) Data frame handling\nI0515 00:37:43.859143 2576 log.go:172] (0xc0006a4960) (5) Data frame sent\nI0515 00:37:43.859161 2576 log.go:172] (0xc000c07970) Data frame 
received for 5\nI0515 00:37:43.859175 2576 log.go:172] (0xc0006a4960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 00:37:43.859218 2576 log.go:172] (0xc000c07970) Data frame received for 3\nI0515 00:37:43.859237 2576 log.go:172] (0xc00024e000) (3) Data frame handling\nI0515 00:37:43.859257 2576 log.go:172] (0xc00024e000) (3) Data frame sent\nI0515 00:37:43.859281 2576 log.go:172] (0xc000c07970) Data frame received for 3\nI0515 00:37:43.859300 2576 log.go:172] (0xc00024e000) (3) Data frame handling\nI0515 00:37:43.860723 2576 log.go:172] (0xc000c07970) Data frame received for 1\nI0515 00:37:43.860750 2576 log.go:172] (0xc000725400) (1) Data frame handling\nI0515 00:37:43.860760 2576 log.go:172] (0xc000725400) (1) Data frame sent\nI0515 00:37:43.860774 2576 log.go:172] (0xc000c07970) (0xc000725400) Stream removed, broadcasting: 1\nI0515 00:37:43.860816 2576 log.go:172] (0xc000c07970) Go away received\nI0515 00:37:43.861264 2576 log.go:172] (0xc000c07970) (0xc000725400) Stream removed, broadcasting: 1\nI0515 00:37:43.861284 2576 log.go:172] (0xc000c07970) (0xc00024e000) Stream removed, broadcasting: 3\nI0515 00:37:43.861293 2576 log.go:172] (0xc000c07970) (0xc0006a4960) Stream removed, broadcasting: 5\n" May 15 00:37:43.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:37:43.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:37:43.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:37:44.155: INFO: stderr: "I0515 00:37:44.075789 2597 log.go:172] (0xc0009d5290) (0xc000862fa0) Create stream\nI0515 00:37:44.075864 2597 log.go:172] (0xc0009d5290) (0xc000862fa0) Stream added, broadcasting: 1\nI0515 00:37:44.080333 2597 log.go:172] (0xc0009d5290) Reply frame received for 1\nI0515 00:37:44.080391 2597 log.go:172] (0xc0009d5290) (0xc0008595e0) Create stream\nI0515 00:37:44.080412 2597 log.go:172] (0xc0009d5290) (0xc0008595e0) Stream added, broadcasting: 3\nI0515 00:37:44.081712 2597 log.go:172] (0xc0009d5290) Reply frame received for 3\nI0515 00:37:44.081755 2597 log.go:172] (0xc0009d5290) (0xc000672640) Create stream\nI0515 00:37:44.081772 2597 log.go:172] (0xc0009d5290) (0xc000672640) Stream added, broadcasting: 5\nI0515 00:37:44.082863 2597 log.go:172] (0xc0009d5290) Reply frame received for 5\nI0515 00:37:44.147678 2597 log.go:172] (0xc0009d5290) Data frame received for 3\nI0515 00:37:44.147712 2597 log.go:172] (0xc0008595e0) (3) Data frame handling\nI0515 00:37:44.147728 2597 log.go:172] (0xc0008595e0) (3) Data frame sent\nI0515 00:37:44.147738 2597 log.go:172] (0xc0009d5290) Data frame received for 3\nI0515 00:37:44.147748 2597 log.go:172] (0xc0008595e0) (3) Data frame handling\nI0515 00:37:44.147767 2597 log.go:172] (0xc0009d5290) Data frame received for 5\nI0515 00:37:44.147779 2597 log.go:172] (0xc000672640) (5) Data frame handling\nI0515 00:37:44.147796 2597 log.go:172] (0xc000672640) (5) Data frame sent\nI0515 00:37:44.147806 2597 log.go:172] (0xc0009d5290) Data frame received for 5\nI0515 00:37:44.147814 2597 log.go:172] (0xc000672640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 00:37:44.149416 2597 log.go:172] (0xc0009d5290) Data frame received for 1\nI0515 00:37:44.149443 2597 
log.go:172] (0xc000862fa0) (1) Data frame handling\nI0515 00:37:44.149461 2597 log.go:172] (0xc000862fa0) (1) Data frame sent\nI0515 00:37:44.149481 2597 log.go:172] (0xc0009d5290) (0xc000862fa0) Stream removed, broadcasting: 1\nI0515 00:37:44.149942 2597 log.go:172] (0xc0009d5290) (0xc000862fa0) Stream removed, broadcasting: 1\nI0515 00:37:44.149970 2597 log.go:172] (0xc0009d5290) (0xc0008595e0) Stream removed, broadcasting: 3\nI0515 00:37:44.150198 2597 log.go:172] (0xc0009d5290) (0xc000672640) Stream removed, broadcasting: 5\n" May 15 00:37:44.155: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:37:44.155: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:37:44.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5742 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 00:37:44.379: INFO: stderr: "I0515 00:37:44.300348 2617 log.go:172] (0xc000b458c0) (0xc00074c0a0) Create stream\nI0515 00:37:44.300412 2617 log.go:172] (0xc000b458c0) (0xc00074c0a0) Stream added, broadcasting: 1\nI0515 00:37:44.308432 2617 log.go:172] (0xc000b458c0) Reply frame received for 1\nI0515 00:37:44.308478 2617 log.go:172] (0xc000b458c0) (0xc00073b0e0) Create stream\nI0515 00:37:44.308491 2617 log.go:172] (0xc000b458c0) (0xc00073b0e0) Stream added, broadcasting: 3\nI0515 00:37:44.310001 2617 log.go:172] (0xc000b458c0) Reply frame received for 3\nI0515 00:37:44.310027 2617 log.go:172] (0xc000b458c0) (0xc00070cbe0) Create stream\nI0515 00:37:44.310037 2617 log.go:172] (0xc000b458c0) (0xc00070cbe0) Stream added, broadcasting: 5\nI0515 00:37:44.312118 2617 log.go:172] (0xc000b458c0) Reply frame received for 5\nI0515 00:37:44.372981 2617 log.go:172] (0xc000b458c0) Data frame received for 3\nI0515 00:37:44.373021 2617 log.go:172] (0xc00073b0e0) (3) Data frame handling\nI0515 00:37:44.373039 2617 log.go:172] (0xc00073b0e0) (3) Data frame sent\nI0515 00:37:44.373056 2617 log.go:172] (0xc000b458c0) Data frame received for 3\nI0515 00:37:44.373070 2617 log.go:172] (0xc00073b0e0) (3) Data frame handling\nI0515 00:37:44.373096 2617 log.go:172] (0xc000b458c0) Data frame received for 5\nI0515 00:37:44.373283 2617 log.go:172] (0xc00070cbe0) (5) Data frame handling\nI0515 00:37:44.373321 2617 log.go:172] (0xc00070cbe0) (5) Data frame sent\nI0515 00:37:44.373347 2617 log.go:172] (0xc000b458c0) Data frame received for 5\nI0515 00:37:44.373362 2617 log.go:172] (0xc00070cbe0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 00:37:44.374663 2617 log.go:172] (0xc000b458c0) Data frame received for 1\nI0515 00:37:44.374694 2617 log.go:172] (0xc00074c0a0) (1) Data frame handling\nI0515 00:37:44.374719 2617 log.go:172] (0xc00074c0a0) (1) Data frame sent\nI0515 00:37:44.374737 2617 log.go:172] (0xc000b458c0) (0xc00074c0a0) Stream removed, broadcasting: 1\nI0515 00:37:44.374755 2617 log.go:172] (0xc000b458c0) Go away received\nI0515 00:37:44.375113 2617 log.go:172] (0xc000b458c0) (0xc00074c0a0) Stream removed, broadcasting: 1\nI0515 00:37:44.375126 2617 log.go:172] (0xc000b458c0) (0xc00073b0e0) Stream removed, broadcasting: 3\nI0515 00:37:44.375133 2617 log.go:172] (0xc000b458c0) (0xc00070cbe0) Stream removed, broadcasting: 5\n" May 15 00:37:44.379: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 00:37:44.379: 
INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 00:37:44.379: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 15 00:38:04.400: INFO: Deleting all statefulset in ns statefulset-5742 May 15 00:38:04.402: INFO: Scaling statefulset ss to 0 May 15 00:38:04.410: INFO: Waiting for statefulset status.replicas updated to 0 May 15 00:38:04.412: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:38:04.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5742" for this suite. • [SLOW TEST:82.611 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":166,"skipped":2644,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:38:04.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:38:04.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 15 00:38:05.269: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:05Z]] name:name1 resourceVersion:4682972 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2f60b478-6f4a-44ca-a33f-c71ca3e50e2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 15 00:38:15.275: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:15Z 
generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:15Z]] name:name2 resourceVersion:4683042 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:998de3ba-b867-4ac3-8617-90ed024b6369] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 15 00:38:25.282: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:25Z]] name:name1 resourceVersion:4683081 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2f60b478-6f4a-44ca-a33f-c71ca3e50e2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 15 00:38:35.290: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:35Z]] name:name2 resourceVersion:4683116 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:998de3ba-b867-4ac3-8617-90ed024b6369] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 15 00:38:45.299: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:25Z]] name:name1 resourceVersion:4683156 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2f60b478-6f4a-44ca-a33f-c71ca3e50e2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 15 00:38:55.333: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T00:38:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-15T00:38:35Z]] name:name2 resourceVersion:4683194 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:998de3ba-b867-4ac3-8617-90ed024b6369] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:39:05.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2834" for this suite. 
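The CRD watch test above registers a custom resource (group mygroup.example.com, version v1beta1, plural noxus, inferred from the selfLinks in the dumps) and observes the ADDED/MODIFIED/DELETED events shown. A sketch of watching such custom resources with client-go's dynamic client; the kubeconfig path is taken from the log, everything else is an assumption about the resource under test:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Group/version/resource inferred from the selfLinks above
	// (/apis/mygroup.example.com/v1beta1/noxus/...).
	gvr := schema.GroupVersionResource{
		Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus",
	}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Matches the "Got : ADDED/MODIFIED/DELETED ..." lines in the log.
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}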
• [SLOW TEST:61.452 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":167,"skipped":2652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:39:05.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-a28ab7dc-0322-4154-88e8-d264be7c4a80 STEP: Creating configMap with name cm-test-opt-upd-a76a580c-23f9-4fa4-a2db-48ca61777dbd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a28ab7dc-0322-4154-88e8-d264be7c4a80 STEP: Updating configmap cm-test-opt-upd-a76a580c-23f9-4fa4-a2db-48ca61777dbd STEP: Creating configMap with name cm-test-opt-create-2e3e7598-7624-48a2-b1ea-89eab5930766 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:39:16.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2594" for this suite. 
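The projected configMap test above creates the pod with optional ConfigMap sources, then deletes one, updates one, and creates one, expecting the volume contents to follow. A sketch of the projected volume with Optional set, which is what allows the delete to happen without breaking the pod; volume and source names echo the log but the exact layout is an assumption:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	cmSource := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             &optional, // deleting the ConfigMap must not break the pod
		}}
	}
	vol := corev1.Volume{
		Name: "projected-configmaps",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{
				cmSource("cm-test-opt-del"),    // deleted during the test
				cmSource("cm-test-opt-upd"),    // updated during the test
				cmSource("cm-test-opt-create"), // created only after the pod is running
			}},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}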
• [SLOW TEST:10.505 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2712,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:39:16.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:39:49.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2505" for this suite. 
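In the Container Runtime test above, the rpa/rpof/rpn suffixes plausibly stand for restartPolicy Always/OnFailure/Never; each container exits and the test asserts the resulting RestartCount, Phase, Ready condition and State. A hedged sketch of pods exercising the three policies; the image and exit command are illustrative stand-ins for the suite's own test image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One pod per restart policy, each with a container that exits non-zero;
	// the policy determines whether the kubelet restarts it.
	for name, policy := range map[string]corev1.RestartPolicy{
		"terminate-cmd-rpa":  corev1.RestartPolicyAlways,
		"terminate-cmd-rpof": corev1.RestartPolicyOnFailure,
		"terminate-cmd-rpn":  corev1.RestartPolicyNever,
	} {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				RestartPolicy: policy,
				Containers: []corev1.Container{{
					Name:    name,
					Image:   "busybox", // illustrative; the suite uses its own image
					Command: []string{"sh", "-c", "exit 1"},
				}},
			},
		}
		fmt.Printf("%s: restartPolicy=%s\n", pod.Name, pod.Spec.RestartPolicy)
	}
}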
• [SLOW TEST:33.295 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2718,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:39:49.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-b325ae24-eff4-4d06-91cd-240d94705c69 STEP: Creating secret with name secret-projected-all-test-volume-2e12b0c2-886a-449d-afa8-67af3a71165f STEP: Creating a pod to test Check all projections for projected volume plugin May 15 00:39:49.791: INFO: Waiting up to 5m0s for pod "projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a" in namespace "projected-1814" to be "Succeeded or Failed" May 15 00:39:49.815: INFO: Pod "projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.59472ms May 15 00:39:51.892: INFO: Pod "projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101441257s May 15 00:39:53.928: INFO: Pod "projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136900952s STEP: Saw pod success May 15 00:39:53.928: INFO: Pod "projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a" satisfied condition "Succeeded or Failed" May 15 00:39:53.931: INFO: Trying to get logs from node latest-worker2 pod projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a container projected-all-volume-test: STEP: delete the pod May 15 00:39:54.283: INFO: Waiting for pod projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a to disappear May 15 00:39:54.296: INFO: Pod projected-volume-97efb5bc-5cdc-4329-b872-3409874cdc0a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:39:54.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1814" for this suite. 
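The Projected combined test above checks a single projected volume that mixes all three source kinds. A sketch of such a volume, combining a ConfigMap, a Secret and a downward-API file under one mount; the object names echo the log, the file path is an illustrative assumption:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume with all three projection sources, matching
	// "Check all projections for projected volume plugin" above.
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{
				{ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"}}},
				{Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"}}},
				{DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "podname",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}}}},
			}},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}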
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2725,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:39:54.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 15 00:39:55.177: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 15 00:39:57.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:39:59.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725099995, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 
00:40:02.223: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:40:02.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:40:03.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5175" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.288 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":171,"skipped":2741,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:40:03.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 15 00:40:03.664: INFO: Waiting up to 5m0s for pod "pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6" in namespace "emptydir-6012" to be "Succeeded or Failed" May 15 00:40:03.668: INFO: Pod "pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821443ms May 15 00:40:05.676: INFO: Pod "pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012140017s May 15 00:40:07.680: INFO: Pod "pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016126631s STEP: Saw pod success May 15 00:40:07.680: INFO: Pod "pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6" satisfied condition "Succeeded or Failed" May 15 00:40:07.683: INFO: Trying to get logs from node latest-worker2 pod pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6 container test-container: STEP: delete the pod May 15 00:40:07.725: INFO: Waiting for pod pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6 to disappear May 15 00:40:07.739: INFO: Pod pod-7a284ce4-4c2d-438b-b2d2-9d1bb60f5cd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:40:07.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6012" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2751,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:40:07.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 in namespace container-probe-5121 May 15 00:40:11.856: INFO: Started pod liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 in namespace container-probe-5121 STEP: checking the pod's current state and verifying that restartCount is present May 15 00:40:11.859: INFO: Initial restart count of pod liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is 0 May 15 00:40:29.922: INFO: Restart count of pod container-probe-5121/liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is now 1 (18.062824392s elapsed) May 15 00:40:49.976: INFO: Restart count of pod container-probe-5121/liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is now 2 (38.11665372s elapsed) May 15 00:41:10.056: INFO: Restart count of pod container-probe-5121/liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is now 3 (58.196862442s elapsed) May 15 00:41:30.148: INFO: Restart count of pod container-probe-5121/liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is now 4 (1m18.289185902s elapsed) May 15 00:42:30.275: INFO: Restart count of pod container-probe-5121/liveness-3b64b2e7-e65a-48e2-ac83-9b72acaace54 is now 5 (2m18.415739655s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:42:30.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5121" for this suite. 
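The pattern this spec exercises can be replayed by hand with a pod whose exec liveness probe starts failing shortly after startup, so the kubelet kills and restarts the container and restartCount climbs exactly as logged above. A minimal sketch, not the manifest the suite generates (the pod name and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo            # hypothetical name; the suite generates its own
spec:
  containers:
  - name: liveness
    image: busybox               # assumed image; any shell-capable image works
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command: [cat, /tmp/healthy]   # fails once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Watch the restart count increase monotonically, as in the log entries above:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'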
• [SLOW TEST:142.615 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:42:30.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 15 00:42:34.766: INFO: Pod pod-hostip-8725584b-913c-4a2c-8eb0-c50accf92b3e has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:42:34.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6792" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:42:34.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 15 00:42:35.719: INFO: Pod name wrapped-volume-race-7d3c6a41-d565-4f69-bffe-6951f5699be4: Found 0 pods out of 5 May 15 00:42:40.739: INFO: Pod name wrapped-volume-race-7d3c6a41-d565-4f69-bffe-6951f5699be4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7d3c6a41-d565-4f69-bffe-6951f5699be4 in namespace emptydir-wrapper-3966, will wait for the garbage collector to delete the pods May 15 00:42:52.851: INFO: Deleting ReplicationController wrapped-volume-race-7d3c6a41-d565-4f69-bffe-6951f5699be4 took: 5.002299ms May 15 00:42:53.251: INFO: Terminating ReplicationController 
wrapped-volume-race-7d3c6a41-d565-4f69-bffe-6951f5699be4 pods took: 400.242633ms STEP: Creating RC which spawns configmap-volume pods May 15 00:43:05.284: INFO: Pod name wrapped-volume-race-dde569cd-84a7-420d-a8cc-c664677bc348: Found 0 pods out of 5 May 15 00:43:10.291: INFO: Pod name wrapped-volume-race-dde569cd-84a7-420d-a8cc-c664677bc348: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dde569cd-84a7-420d-a8cc-c664677bc348 in namespace emptydir-wrapper-3966, will wait for the garbage collector to delete the pods May 15 00:43:26.463: INFO: Deleting ReplicationController wrapped-volume-race-dde569cd-84a7-420d-a8cc-c664677bc348 took: 96.58815ms May 15 00:43:26.963: INFO: Terminating ReplicationController wrapped-volume-race-dde569cd-84a7-420d-a8cc-c664677bc348 pods took: 500.213916ms STEP: Creating RC which spawns configmap-volume pods May 15 00:43:35.508: INFO: Pod name wrapped-volume-race-3ccb4de1-688a-404f-97fd-79ca42a46b60: Found 0 pods out of 5 May 15 00:43:40.516: INFO: Pod name wrapped-volume-race-3ccb4de1-688a-404f-97fd-79ca42a46b60: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3ccb4de1-688a-404f-97fd-79ca42a46b60 in namespace emptydir-wrapper-3966, will wait for the garbage collector to delete the pods May 15 00:43:56.761: INFO: Deleting ReplicationController wrapped-volume-race-3ccb4de1-688a-404f-97fd-79ca42a46b60 took: 51.333257ms May 15 00:43:57.161: INFO: Terminating ReplicationController wrapped-volume-race-3ccb4de1-688a-404f-97fd-79ca42a46b60 pods took: 400.397234ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:44:16.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3966" for this suite. 
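A scaled-down sketch of the pattern raced above (the suite mounts 50 ConfigMaps across 5 replicas and repeats the create/delete cycle three times): one pod mounting two ConfigMap volumes concurrently. All names here are hypothetical:

kubectl create configmap cm-0 --from-literal=data=0
kubectl create configmap cm-1 --from-literal=data=1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-demo      # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox               # assumed image
    command: [sleep, "3600"]
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
    - name: cm-1
      mountPath: /etc/cm-1
  volumes:
  - name: cm-0
    configMap:
      name: cm-0
  - name: cm-1
    configMap:
      name: cm-1
EOF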
• [SLOW TEST:101.628 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":175,"skipped":2844,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:44:16.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6b9f9777-4880-41da-beb0-865a417887af in namespace container-probe-5747 May 15 00:44:20.536: INFO: Started pod liveness-6b9f9777-4880-41da-beb0-865a417887af in namespace container-probe-5747 STEP: checking the pod's current state and verifying that restartCount is present May 15 00:44:20.540: INFO: Initial restart count of pod liveness-6b9f9777-4880-41da-beb0-865a417887af is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:48:21.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5747" for this suite. 
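The complement of the earlier restart-count spec: here the container keeps accepting connections on the probed port, so after the four-minute observation window the restart count is still 0. A minimal sketch, assuming the agnhost test image (the log does not show the exact image or tag the suite uses):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tcp-liveness-demo        # hypothetical name
spec:
  containers:
  - name: web
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: [netexec, --http-port=8080]                # serves on the probed port
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF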
• [SLOW TEST:244.921 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2849,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:48:21.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:48:21.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8" in namespace "downward-api-8482" to be "Succeeded or Failed" May 15 00:48:21.669: INFO: Pod "downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.436961ms May 15 00:48:23.672: INFO: Pod "downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012223331s May 15 00:48:25.676: INFO: Pod "downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015827426s STEP: Saw pod success May 15 00:48:25.676: INFO: Pod "downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8" satisfied condition "Succeeded or Failed" May 15 00:48:25.679: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8 container client-container: STEP: delete the pod May 15 00:48:25.858: INFO: Waiting for pod downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8 to disappear May 15 00:48:25.895: INFO: Pod downwardapi-volume-792c55ec-32ec-4aa1-a970-7564aed76cd8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:48:25.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8482" for this suite. 
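In manifest form, the behavior verified here is a downwardAPI volume projecting limits.memory for a container that declares no memory limit; in that case the projected value falls back to the node's allocatable memory. A hedged sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo            # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox               # assumed image; no resources.limits set on purpose
    command: [sh, -c, cat /etc/podinfo/mem_limit]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # with no limit set, node allocatable is projected
EOF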
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2863,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:48:25.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 15 00:48:30.258: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3368 PodName:var-expansion-894ecc4b-db0d-4fb5-8f6d-f217ae6f46b7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 00:48:30.258: INFO: >>> kubeConfig: /root/.kube/config I0515 00:48:30.291005 7 log.go:172] (0xc002e2f6b0) (0xc000abb680) Create stream I0515 00:48:30.291065 7 log.go:172] (0xc002e2f6b0) (0xc000abb680) Stream added, broadcasting: 1 I0515 00:48:30.293284 7 log.go:172] (0xc002e2f6b0) Reply frame received for 1 I0515 00:48:30.293327 7 log.go:172] (0xc002e2f6b0) (0xc000abb900) Create stream I0515 00:48:30.293344 7 log.go:172] (0xc002e2f6b0) (0xc000abb900) Stream added, broadcasting: 3 I0515 00:48:30.294465 7 log.go:172] (0xc002e2f6b0) Reply frame received for 3 I0515 00:48:30.294505 7 log.go:172] (0xc002e2f6b0) (0xc001393040) Create stream I0515 00:48:30.294523 7 log.go:172] (0xc002e2f6b0) (0xc001393040) Stream added, broadcasting: 5 I0515 00:48:30.295706 7 log.go:172] (0xc002e2f6b0) Reply frame received for 5 I0515 00:48:30.379273 7 log.go:172] (0xc002e2f6b0) Data frame received for 3 I0515 00:48:30.379315 7 log.go:172] (0xc000abb900) (3) Data frame handling I0515 00:48:30.379362 7 log.go:172] (0xc002e2f6b0) Data frame received for 5 I0515 00:48:30.379392 7 log.go:172] (0xc001393040) (5) Data frame handling I0515 00:48:30.380598 7 log.go:172] (0xc002e2f6b0) Data frame received for 1 I0515 00:48:30.380624 7 log.go:172] (0xc000abb680) (1) Data frame handling I0515 00:48:30.380657 7 log.go:172] (0xc000abb680) (1) Data frame sent I0515 00:48:30.380686 7 log.go:172] (0xc002e2f6b0) (0xc000abb680) Stream removed, broadcasting: 1 I0515 00:48:30.380825 7 log.go:172] (0xc002e2f6b0) (0xc000abb680) Stream removed, broadcasting: 1 I0515 00:48:30.380851 7 log.go:172] (0xc002e2f6b0) (0xc000abb900) Stream removed, broadcasting: 3 I0515 00:48:30.380895 7 log.go:172] (0xc002e2f6b0) Go away received I0515 00:48:30.381091 7 log.go:172] (0xc002e2f6b0) (0xc001393040) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 15 00:48:30.385: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3368 PodName:var-expansion-894ecc4b-db0d-4fb5-8f6d-f217ae6f46b7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 
00:48:30.385: INFO: >>> kubeConfig: /root/.kube/config I0515 00:48:30.420627 7 log.go:172] (0xc002700000) (0xc001172640) Create stream I0515 00:48:30.420666 7 log.go:172] (0xc002700000) (0xc001172640) Stream added, broadcasting: 1 I0515 00:48:30.422710 7 log.go:172] (0xc002700000) Reply frame received for 1 I0515 00:48:30.422740 7 log.go:172] (0xc002700000) (0xc0011726e0) Create stream I0515 00:48:30.422750 7 log.go:172] (0xc002700000) (0xc0011726e0) Stream added, broadcasting: 3 I0515 00:48:30.423675 7 log.go:172] (0xc002700000) Reply frame received for 3 I0515 00:48:30.423699 7 log.go:172] (0xc002700000) (0xc001393360) Create stream I0515 00:48:30.423707 7 log.go:172] (0xc002700000) (0xc001393360) Stream added, broadcasting: 5 I0515 00:48:30.424482 7 log.go:172] (0xc002700000) Reply frame received for 5 I0515 00:48:30.493782 7 log.go:172] (0xc002700000) Data frame received for 3 I0515 00:48:30.493819 7 log.go:172] (0xc0011726e0) (3) Data frame handling I0515 00:48:30.493849 7 log.go:172] (0xc002700000) Data frame received for 5 I0515 00:48:30.493861 7 log.go:172] (0xc001393360) (5) Data frame handling I0515 00:48:30.495571 7 log.go:172] (0xc002700000) Data frame received for 1 I0515 00:48:30.495581 7 log.go:172] (0xc001172640) (1) Data frame handling I0515 00:48:30.495588 7 log.go:172] (0xc001172640) (1) Data frame sent I0515 00:48:30.495595 7 log.go:172] (0xc002700000) (0xc001172640) Stream removed, broadcasting: 1 I0515 00:48:30.495605 7 log.go:172] (0xc002700000) Go away received I0515 00:48:30.495794 7 log.go:172] (0xc002700000) (0xc001172640) Stream removed, broadcasting: 1 I0515 00:48:30.495827 7 log.go:172] (0xc002700000) (0xc0011726e0) Stream removed, broadcasting: 3 I0515 00:48:30.495861 7 log.go:172] (0xc002700000) (0xc001393360) Stream removed, broadcasting: 5 STEP: updating the annotation value May 15 00:48:31.004: INFO: Successfully updated pod "var-expansion-894ecc4b-db0d-4fb5-8f6d-f217ae6f46b7" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 15 00:48:31.032: INFO: Deleting pod "var-expansion-894ecc4b-db0d-4fb5-8f6d-f217ae6f46b7" in namespace "var-expansion-3368" May 15 00:48:31.037: INFO: Wait up to 5m0s for pod "var-expansion-894ecc4b-db0d-4fb5-8f6d-f217ae6f46b7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:49:17.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3368" for this suite. 
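The mechanism under test, sketched with subPathExpr: the same emptyDir is mounted twice, once whole at /volume_mount and once through an expanded subpath at /subpath_mount, so a file written via one mount is visible via the other, mirroring the touch / test -f exchange above. This sketch expands the pod name where the suite expands an annotation-driven variable; all names are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expr-demo        # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    command: [sh, -c, 'touch /subpath_mount/test.log && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_NAME)   # expanded from the container env at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF
# The file written through the subpath mount is visible under the full mount:
kubectl exec subpath-expr-demo -- test -f /volume_mount/subpath-expr-demo/test.log && echo ok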
• [SLOW TEST:51.160 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":178,"skipped":2867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:49:17.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4486 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4486 STEP: creating replication controller externalsvc in namespace services-4486 I0515 00:49:17.437492 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4486, replica count: 2 I0515 00:49:20.487902 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:49:23.488165 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 15 00:49:23.527: INFO: Creating new exec pod May 15 00:49:27.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4486 execpodr2ppm -- /bin/sh -x -c nslookup clusterip-service' May 15 00:49:31.839: INFO: stderr: "I0515 00:49:31.379774 2639 log.go:172] (0xc000d9c6e0) (0xc000705720) Create stream\nI0515 00:49:31.379811 2639 log.go:172] (0xc000d9c6e0) (0xc000705720) Stream added, broadcasting: 1\nI0515 00:49:31.382963 2639 log.go:172] (0xc000d9c6e0) Reply frame received for 1\nI0515 00:49:31.383008 2639 log.go:172] (0xc000d9c6e0) (0xc0006def00) Create stream\nI0515 00:49:31.383029 2639 log.go:172] (0xc000d9c6e0) (0xc0006def00) Stream added, broadcasting: 3\nI0515 00:49:31.383912 2639 log.go:172] (0xc000d9c6e0) Reply frame received for 3\nI0515 00:49:31.383963 2639 log.go:172] (0xc000d9c6e0) (0xc0006b6640) Create stream\nI0515 00:49:31.383976 2639 log.go:172] (0xc000d9c6e0) (0xc0006b6640) Stream added, broadcasting: 5\nI0515 00:49:31.384795 2639 log.go:172] (0xc000d9c6e0) Reply frame received for 5\nI0515 00:49:31.501026 2639 log.go:172] (0xc000d9c6e0) Data frame received for 
5\nI0515 00:49:31.501071 2639 log.go:172] (0xc0006b6640) (5) Data frame handling\nI0515 00:49:31.501098 2639 log.go:172] (0xc0006b6640) (5) Data frame sent\n+ nslookup clusterip-service\nI0515 00:49:31.828312 2639 log.go:172] (0xc000d9c6e0) Data frame received for 3\nI0515 00:49:31.828336 2639 log.go:172] (0xc0006def00) (3) Data frame handling\nI0515 00:49:31.828352 2639 log.go:172] (0xc0006def00) (3) Data frame sent\nI0515 00:49:31.829566 2639 log.go:172] (0xc000d9c6e0) Data frame received for 3\nI0515 00:49:31.829595 2639 log.go:172] (0xc0006def00) (3) Data frame handling\nI0515 00:49:31.829616 2639 log.go:172] (0xc0006def00) (3) Data frame sent\nI0515 00:49:31.830185 2639 log.go:172] (0xc000d9c6e0) Data frame received for 5\nI0515 00:49:31.830201 2639 log.go:172] (0xc0006b6640) (5) Data frame handling\nI0515 00:49:31.830219 2639 log.go:172] (0xc000d9c6e0) Data frame received for 3\nI0515 00:49:31.830226 2639 log.go:172] (0xc0006def00) (3) Data frame handling\nI0515 00:49:31.832371 2639 log.go:172] (0xc000d9c6e0) Data frame received for 1\nI0515 00:49:31.832396 2639 log.go:172] (0xc000705720) (1) Data frame handling\nI0515 00:49:31.832407 2639 log.go:172] (0xc000705720) (1) Data frame sent\nI0515 00:49:31.832420 2639 log.go:172] (0xc000d9c6e0) (0xc000705720) Stream removed, broadcasting: 1\nI0515 00:49:31.832503 2639 log.go:172] (0xc000d9c6e0) Go away received\nI0515 00:49:31.832821 2639 log.go:172] (0xc000d9c6e0) (0xc000705720) Stream removed, broadcasting: 1\nI0515 00:49:31.832841 2639 log.go:172] (0xc000d9c6e0) (0xc0006def00) Stream removed, broadcasting: 3\nI0515 00:49:31.832852 2639 log.go:172] (0xc000d9c6e0) (0xc0006b6640) Stream removed, broadcasting: 5\n" May 15 00:49:31.839: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4486.svc.cluster.local\tcanonical name = externalsvc.services-4486.svc.cluster.local.\nName:\texternalsvc.services-4486.svc.cluster.local\nAddress: 10.106.18.198\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4486, will wait for the garbage collector to delete the pods May 15 00:49:31.957: INFO: Deleting ReplicationController externalsvc took: 10.159624ms May 15 00:49:32.357: INFO: Terminating ReplicationController externalsvc pods took: 400.344163ms May 15 00:49:45.330: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:49:45.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4486" for this suite. 
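The type flip itself can be approximated with a patch along the following lines; ExternalName services carry no cluster IP, so the conversion clears spec.clusterIP (whether that must be nulled explicitly depends on the API server version). A rough sketch reusing the FQDN logged above:

kubectl create service clusterip clusterip-service --tcp=80:80
kubectl patch service clusterip-service --type=merge -p '{
  "spec": {
    "type": "ExternalName",
    "externalName": "externalsvc.services-4486.svc.cluster.local",
    "clusterIP": null
  }
}'
# DNS now answers with a CNAME to the external name, as in the nslookup output above:
kubectl run dns-check --rm -i --restart=Never --image=busybox -- nslookup clusterip-service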
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.294 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":179,"skipped":2902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:49:45.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-cl7j STEP: Creating a pod to test atomic-volume-subpath May 15 00:49:45.481: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cl7j" in namespace "subpath-1193" to be "Succeeded or Failed" May 15 00:49:45.485: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158032ms May 15 00:49:47.527: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045631903s May 15 00:49:49.531: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 4.050296932s May 15 00:49:51.535: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 6.054569777s May 15 00:49:53.545: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 8.063877935s May 15 00:49:55.549: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 10.068054846s May 15 00:49:57.562: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 12.081268525s May 15 00:49:59.566: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 14.085182942s May 15 00:50:01.570: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 16.089476093s May 15 00:50:03.574: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 18.093387034s May 15 00:50:05.578: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. Elapsed: 20.097047291s May 15 00:50:07.581: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.10028551s May 15 00:50:09.586: INFO: Pod "pod-subpath-test-configmap-cl7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.105183283s STEP: Saw pod success May 15 00:50:09.586: INFO: Pod "pod-subpath-test-configmap-cl7j" satisfied condition "Succeeded or Failed" May 15 00:50:09.589: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-cl7j container test-container-subpath-configmap-cl7j: STEP: delete the pod May 15 00:50:09.632: INFO: Waiting for pod pod-subpath-test-configmap-cl7j to disappear May 15 00:50:09.832: INFO: Pod pod-subpath-test-configmap-cl7j no longer exists STEP: Deleting pod pod-subpath-test-configmap-cl7j May 15 00:50:09.832: INFO: Deleting pod "pod-subpath-test-configmap-cl7j" in namespace "subpath-1193" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:09.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1193" for this suite. • [SLOW TEST:24.483 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":180,"skipped":2950,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:09.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:50:10.354: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:50:12.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100610, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100610, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100610, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100610, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:50:15.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9385" for this suite. STEP: Destroying namespace "webhook-9385-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.251 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":181,"skipped":2950,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:16.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 15 00:50:16.396: INFO: Waiting up to 5m0s for pod "var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919" in namespace "var-expansion-9553" to be "Succeeded or Failed" May 15 00:50:16.666: INFO: Pod "var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919": Phase="Pending", Reason="", readiness=false. Elapsed: 269.868397ms May 15 00:50:18.670: INFO: Pod "var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.273180375s May 15 00:50:20.674: INFO: Pod "var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.277274577s STEP: Saw pod success May 15 00:50:20.674: INFO: Pod "var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919" satisfied condition "Succeeded or Failed" May 15 00:50:20.676: INFO: Trying to get logs from node latest-worker pod var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919 container dapi-container: STEP: delete the pod May 15 00:50:20.938: INFO: Waiting for pod var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919 to disappear May 15 00:50:20.951: INFO: Pod var-expansion-7cbd3c27-cc6d-4e50-b3b4-d51a7e48d919 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9553" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":2952,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:20.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8170 STEP: creating service affinity-clusterip-transition in namespace services-8170 STEP: creating replication controller affinity-clusterip-transition in namespace services-8170 I0515 00:50:21.347439 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8170, replica count: 3 I0515 00:50:24.397871 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:50:27.398069 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:50:27.410: INFO: Creating new exec pod May 15 00:50:32.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8170 execpod-affinityr997s -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 15 00:50:32.679: INFO: stderr: "I0515 00:50:32.581071 2669 log.go:172] (0xc00022ad10) (0xc000a78320) Create stream\nI0515 00:50:32.581243 2669 log.go:172] (0xc00022ad10) (0xc000a78320) Stream added, broadcasting: 1\nI0515 00:50:32.585841 2669 log.go:172] (0xc00022ad10) Reply frame received for 1\nI0515 00:50:32.585882 2669 log.go:172] (0xc00022ad10) (0xc000630500) Create 
stream\nI0515 00:50:32.585894 2669 log.go:172] (0xc00022ad10) (0xc000630500) Stream added, broadcasting: 3\nI0515 00:50:32.586676 2669 log.go:172] (0xc00022ad10) Reply frame received for 3\nI0515 00:50:32.586707 2669 log.go:172] (0xc00022ad10) (0xc0005101e0) Create stream\nI0515 00:50:32.586718 2669 log.go:172] (0xc00022ad10) (0xc0005101e0) Stream added, broadcasting: 5\nI0515 00:50:32.587471 2669 log.go:172] (0xc00022ad10) Reply frame received for 5\nI0515 00:50:32.672353 2669 log.go:172] (0xc00022ad10) Data frame received for 5\nI0515 00:50:32.672489 2669 log.go:172] (0xc0005101e0) (5) Data frame handling\nI0515 00:50:32.672522 2669 log.go:172] (0xc0005101e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0515 00:50:32.672543 2669 log.go:172] (0xc00022ad10) Data frame received for 5\nI0515 00:50:32.672560 2669 log.go:172] (0xc0005101e0) (5) Data frame handling\nI0515 00:50:32.672585 2669 log.go:172] (0xc0005101e0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0515 00:50:32.672860 2669 log.go:172] (0xc00022ad10) Data frame received for 3\nI0515 00:50:32.672907 2669 log.go:172] (0xc000630500) (3) Data frame handling\nI0515 00:50:32.672958 2669 log.go:172] (0xc00022ad10) Data frame received for 5\nI0515 00:50:32.673000 2669 log.go:172] (0xc0005101e0) (5) Data frame handling\nI0515 00:50:32.674469 2669 log.go:172] (0xc00022ad10) Data frame received for 1\nI0515 00:50:32.674487 2669 log.go:172] (0xc000a78320) (1) Data frame handling\nI0515 00:50:32.674499 2669 log.go:172] (0xc000a78320) (1) Data frame sent\nI0515 00:50:32.674521 2669 log.go:172] (0xc00022ad10) (0xc000a78320) Stream removed, broadcasting: 1\nI0515 00:50:32.674546 2669 log.go:172] (0xc00022ad10) Go away received\nI0515 00:50:32.674927 2669 log.go:172] (0xc00022ad10) (0xc000a78320) Stream removed, broadcasting: 1\nI0515 00:50:32.674950 2669 log.go:172] (0xc00022ad10) (0xc000630500) Stream removed, broadcasting: 3\nI0515 00:50:32.674963 2669 log.go:172] (0xc00022ad10) (0xc0005101e0) Stream removed, broadcasting: 5\n" May 15 00:50:32.679: INFO: stdout: "" May 15 00:50:32.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8170 execpod-affinityr997s -- /bin/sh -x -c nc -zv -t -w 2 10.106.26.66 80' May 15 00:50:32.863: INFO: stderr: "I0515 00:50:32.803844 2690 log.go:172] (0xc00003a370) (0xc0004dd220) Create stream\nI0515 00:50:32.803881 2690 log.go:172] (0xc00003a370) (0xc0004dd220) Stream added, broadcasting: 1\nI0515 00:50:32.806132 2690 log.go:172] (0xc00003a370) Reply frame received for 1\nI0515 00:50:32.806163 2690 log.go:172] (0xc00003a370) (0xc0002a2dc0) Create stream\nI0515 00:50:32.806173 2690 log.go:172] (0xc00003a370) (0xc0002a2dc0) Stream added, broadcasting: 3\nI0515 00:50:32.806762 2690 log.go:172] (0xc00003a370) Reply frame received for 3\nI0515 00:50:32.806810 2690 log.go:172] (0xc00003a370) (0xc0000f2e60) Create stream\nI0515 00:50:32.806832 2690 log.go:172] (0xc00003a370) (0xc0000f2e60) Stream added, broadcasting: 5\nI0515 00:50:32.807552 2690 log.go:172] (0xc00003a370) Reply frame received for 5\nI0515 00:50:32.859351 2690 log.go:172] (0xc00003a370) Data frame received for 5\nI0515 00:50:32.859368 2690 log.go:172] (0xc0000f2e60) (5) Data frame handling\nI0515 00:50:32.859385 2690 log.go:172] (0xc0000f2e60) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.26.66 80\nConnection to 10.106.26.66 80 port [tcp/http] succeeded!\nI0515 00:50:32.859405 2690 
log.go:172] (0xc00003a370) Data frame received for 5\nI0515 00:50:32.859410 2690 log.go:172] (0xc0000f2e60) (5) Data frame handling\nI0515 00:50:32.859422 2690 log.go:172] (0xc00003a370) Data frame received for 3\nI0515 00:50:32.859444 2690 log.go:172] (0xc0002a2dc0) (3) Data frame handling\nI0515 00:50:32.860317 2690 log.go:172] (0xc00003a370) Data frame received for 1\nI0515 00:50:32.860324 2690 log.go:172] (0xc0004dd220) (1) Data frame handling\nI0515 00:50:32.860329 2690 log.go:172] (0xc0004dd220) (1) Data frame sent\nI0515 00:50:32.860385 2690 log.go:172] (0xc00003a370) (0xc0004dd220) Stream removed, broadcasting: 1\nI0515 00:50:32.860490 2690 log.go:172] (0xc00003a370) Go away received\nI0515 00:50:32.860569 2690 log.go:172] (0xc00003a370) (0xc0004dd220) Stream removed, broadcasting: 1\nI0515 00:50:32.860577 2690 log.go:172] (0xc00003a370) (0xc0002a2dc0) Stream removed, broadcasting: 3\nI0515 00:50:32.860581 2690 log.go:172] (0xc00003a370) (0xc0000f2e60) Stream removed, broadcasting: 5\n" May 15 00:50:32.863: INFO: stdout: "" May 15 00:50:32.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8170 execpod-affinityr997s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.26.66:80/ ; done' May 15 00:50:33.193: INFO: stderr: "I0515 00:50:33.025787 2713 log.go:172] (0xc0004702c0) (0xc0003d0d20) Create stream\nI0515 00:50:33.025825 2713 log.go:172] (0xc0004702c0) (0xc0003d0d20) Stream added, broadcasting: 1\nI0515 00:50:33.027447 2713 log.go:172] (0xc0004702c0) Reply frame received for 1\nI0515 00:50:33.027467 2713 log.go:172] (0xc0004702c0) (0xc000141720) Create stream\nI0515 00:50:33.027473 2713 log.go:172] (0xc0004702c0) (0xc000141720) Stream added, broadcasting: 3\nI0515 00:50:33.027957 2713 log.go:172] (0xc0004702c0) Reply frame received for 3\nI0515 00:50:33.027978 2713 log.go:172] (0xc0004702c0) (0xc000312e60) Create stream\nI0515 00:50:33.027986 2713 log.go:172] (0xc0004702c0) (0xc000312e60) Stream added, broadcasting: 5\nI0515 00:50:33.028508 2713 log.go:172] (0xc0004702c0) Reply frame received for 5\nI0515 00:50:33.079148 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.079184 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.079209 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ seq 0 15\nI0515 00:50:33.110271 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.110296 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.110309 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.110316 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.110323 2713 log.go:172] (0xc000312e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.110343 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.110466 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.110480 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.110496 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.116180 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.116190 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.116204 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.116825 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.116853 2713 log.go:172] 
(0xc000312e60) (5) Data frame handling\nI0515 00:50:33.116866 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.116875 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.116885 2713 log.go:172] (0xc000312e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.116905 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.116921 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.116930 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.116943 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.120111 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.120128 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.120152 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.120384 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.120401 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.120412 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.120420 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.120429 2713 log.go:172] (0xc000312e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0515 00:50:33.120443 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.120503 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.120527 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.120552 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.120584 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.120605 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.120633 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n http://10.106.26.66:80/\nI0515 00:50:33.124481 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.124514 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.124570 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.125429 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.125441 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.125448 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.125478 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.125502 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.125536 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.128920 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.128938 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.128952 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.129462 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.129478 2713 log.go:172] (0xc000312e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.129488 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.129506 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.129516 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.129531 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.132459 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.132472 2713 log.go:172] 
(0xc000141720) (3) Data frame handling\nI0515 00:50:33.132479 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.132828 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.132836 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.132842 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.132854 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.132872 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.132892 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.136276 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.136288 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.136298 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.136482 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.136507 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.136527 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.136535 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.136541 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.136549 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.140022 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.140039 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.140055 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.140280 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.140289 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.140294 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.140356 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.140368 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.140376 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.147406 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.147425 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.147440 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.147932 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.147954 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.147965 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.147978 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.147991 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.148001 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.151824 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.151853 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.151893 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.152255 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.152264 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.152269 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.152275 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.152279 2713 log.go:172] 
(0xc000312e60) (5) Data frame handling\nI0515 00:50:33.152284 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.156475 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.156503 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.156531 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.156863 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.156891 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.156903 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.156914 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.156920 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.156926 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.161455 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.161495 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.161527 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.162000 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.162026 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.162076 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.162094 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.162116 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.162147 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.166820 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.166845 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.166869 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.167268 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.167280 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.167288 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.167307 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.167320 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.167331 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.171953 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.171967 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.171979 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.172540 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.172556 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.172566 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.172572 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.172585 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.172597 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.176015 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.176040 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.176057 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.176423 2713 log.go:172] (0xc0004702c0) 
Data frame received for 5\nI0515 00:50:33.176442 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.176460 2713 log.go:172] (0xc000312e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.176564 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.176583 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.176601 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.182101 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.182124 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.182142 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.182387 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.182412 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.182425 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.182449 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.182455 2713 log.go:172] (0xc000312e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.182466 2713 log.go:172] (0xc000312e60) (5) Data frame sent\nI0515 00:50:33.182472 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.182477 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.182485 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.185876 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.185900 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.185921 2713 log.go:172] (0xc000141720) (3) Data frame sent\nI0515 00:50:33.186509 2713 log.go:172] (0xc0004702c0) Data frame received for 5\nI0515 00:50:33.186538 2713 log.go:172] (0xc000312e60) (5) Data frame handling\nI0515 00:50:33.186620 2713 log.go:172] (0xc0004702c0) Data frame received for 3\nI0515 00:50:33.186648 2713 log.go:172] (0xc000141720) (3) Data frame handling\nI0515 00:50:33.188254 2713 log.go:172] (0xc0004702c0) Data frame received for 1\nI0515 00:50:33.188270 2713 log.go:172] (0xc0003d0d20) (1) Data frame handling\nI0515 00:50:33.188278 2713 log.go:172] (0xc0003d0d20) (1) Data frame sent\nI0515 00:50:33.188298 2713 log.go:172] (0xc0004702c0) (0xc0003d0d20) Stream removed, broadcasting: 1\nI0515 00:50:33.188530 2713 log.go:172] (0xc0004702c0) Go away received\nI0515 00:50:33.188668 2713 log.go:172] (0xc0004702c0) (0xc0003d0d20) Stream removed, broadcasting: 1\nI0515 00:50:33.188684 2713 log.go:172] (0xc0004702c0) (0xc000141720) Stream removed, broadcasting: 3\nI0515 00:50:33.188692 2713 log.go:172] (0xc0004702c0) (0xc000312e60) Stream removed, broadcasting: 5\n" May 15 00:50:33.194: INFO: stdout: "\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-cc6m9\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-cc6m9\naffinity-clusterip-transition-cc6m9\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-cc6m9\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-xnhc6\naffinity-clusterip-transition-cc6m9\naffinity-clusterip-transition-xnhc6" May 15 00:50:33.194: INFO: Received response from host: May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 
00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-cc6m9 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-cc6m9 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-cc6m9 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-cc6m9 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-cc6m9 May 15 00:50:33.194: INFO: Received response from host: affinity-clusterip-transition-xnhc6 May 15 00:50:33.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8170 execpod-affinityr997s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.26.66:80/ ; done' May 15 00:50:33.530: INFO: stderr: "I0515 00:50:33.387742 2732 log.go:172] (0xc000a711e0) (0xc000ba6320) Create stream\nI0515 00:50:33.387779 2732 log.go:172] (0xc000a711e0) (0xc000ba6320) Stream added, broadcasting: 1\nI0515 00:50:33.392689 2732 log.go:172] (0xc000a711e0) Reply frame received for 1\nI0515 00:50:33.392727 2732 log.go:172] (0xc000a711e0) (0xc000630dc0) Create stream\nI0515 00:50:33.392739 2732 log.go:172] (0xc000a711e0) (0xc000630dc0) Stream added, broadcasting: 3\nI0515 00:50:33.393831 2732 log.go:172] (0xc000a711e0) Reply frame received for 3\nI0515 00:50:33.393869 2732 log.go:172] (0xc000a711e0) (0xc00045e320) Create stream\nI0515 00:50:33.393881 2732 log.go:172] (0xc000a711e0) (0xc00045e320) Stream added, broadcasting: 5\nI0515 00:50:33.394779 2732 log.go:172] (0xc000a711e0) Reply frame received for 5\nI0515 00:50:33.437484 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.437508 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.437531 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.437548 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.437559 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.437566 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.443399 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.443417 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.443426 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.443894 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.443929 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.443949 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.443977 2732 log.go:172] 
(0xc000a711e0) Data frame received for 5\nI0515 00:50:33.444001 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.444024 2732 log.go:172] (0xc00045e320) (5) Data frame sent\nI0515 00:50:33.444047 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.444063 2732 log.go:172] (0xc00045e320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.444094 2732 log.go:172] (0xc00045e320) (5) Data frame sent\nI0515 00:50:33.450274 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.450300 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.450319 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.450619 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.450638 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.450650 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.450696 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.450724 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.450742 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.454639 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.454662 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.454686 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.455048 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.455070 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.455082 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.455099 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.455110 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.455124 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.460425 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.460436 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.460449 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.460977 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.460987 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.460993 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.461001 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.461009 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.461019 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.465830 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.465840 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.465846 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.466249 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.466267 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.466285 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.466304 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.466318 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.466350 2732 log.go:172] 
(0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.472608 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.472623 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.472636 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.473408 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.473425 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.473435 2732 log.go:172] (0xc00045e320) (5) Data frame sent\nI0515 00:50:33.473447 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.473457 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.473468 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.478750 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.478777 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.478786 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.478806 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.478827 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.478846 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.478860 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.478873 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.478901 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.482543 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.482560 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.482573 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.483227 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.483239 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.483248 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.483267 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.483289 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.483307 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.490764 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.490790 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.490814 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.491394 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.491403 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.491408 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.491431 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.491449 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.491466 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\nI0515 00:50:33.491481 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.491512 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.491527 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.496012 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.496030 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.496050 2732 log.go:172] 
(0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.496550 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.496578 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.496602 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.496624 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.496634 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.496647 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\nI0515 00:50:33.496665 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.496707 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.496728 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.499324 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.499336 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.499344 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.499619 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.499634 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.499648 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.499817 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.499844 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.499870 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.510025 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.510050 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.510071 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.510078 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.510089 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.510095 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.510490 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.510510 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.510518 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.510838 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.510850 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.510856 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.510871 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.510880 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.510889 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.514912 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.514928 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.514938 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.515145 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.515157 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.515169 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.515220 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.515230 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.515238 2732 log.go:172] 
(0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.518123 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.518137 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.518152 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.518377 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.518389 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.518398 2732 log.go:172] (0xc00045e320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.26.66:80/\nI0515 00:50:33.518445 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.518456 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.518469 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.524213 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.524226 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.524237 2732 log.go:172] (0xc000630dc0) (3) Data frame sent\nI0515 00:50:33.524765 2732 log.go:172] (0xc000a711e0) Data frame received for 3\nI0515 00:50:33.524777 2732 log.go:172] (0xc000630dc0) (3) Data frame handling\nI0515 00:50:33.524821 2732 log.go:172] (0xc000a711e0) Data frame received for 5\nI0515 00:50:33.524830 2732 log.go:172] (0xc00045e320) (5) Data frame handling\nI0515 00:50:33.526267 2732 log.go:172] (0xc000a711e0) Data frame received for 1\nI0515 00:50:33.526283 2732 log.go:172] (0xc000ba6320) (1) Data frame handling\nI0515 00:50:33.526298 2732 log.go:172] (0xc000ba6320) (1) Data frame sent\nI0515 00:50:33.526325 2732 log.go:172] (0xc000a711e0) (0xc000ba6320) Stream removed, broadcasting: 1\nI0515 00:50:33.526356 2732 log.go:172] (0xc000a711e0) Go away received\nI0515 00:50:33.526604 2732 log.go:172] (0xc000a711e0) (0xc000ba6320) Stream removed, broadcasting: 1\nI0515 00:50:33.526619 2732 log.go:172] (0xc000a711e0) (0xc000630dc0) Stream removed, broadcasting: 3\nI0515 00:50:33.526630 2732 log.go:172] (0xc000a711e0) (0xc00045e320) Stream removed, broadcasting: 5\n" May 15 00:50:33.530: INFO: stdout: "\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj\naffinity-clusterip-transition-48gpj" May 15 00:50:33.530: INFO: Received response from host: May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: 
affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Received response from host: affinity-clusterip-transition-48gpj May 15 00:50:33.530: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8170, will wait for the garbage collector to delete the pods May 15 00:50:33.951: INFO: Deleting ReplicationController affinity-clusterip-transition took: 275.087118ms May 15 00:50:34.351: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.243839ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:45.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8170" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.368 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":183,"skipped":2957,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:45.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-60c2a62b-82f7-4d74-8496-bce8b65f1a80 STEP: Creating a pod to test consume secrets May 15 00:50:45.652: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca" in namespace "projected-2625" to be "Succeeded or Failed" May 15 00:50:45.689: INFO: Pod "pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.376324ms May 15 00:50:47.693: INFO: Pod "pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041011209s May 15 00:50:49.697: INFO: Pod "pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045492894s STEP: Saw pod success May 15 00:50:49.697: INFO: Pod "pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca" satisfied condition "Succeeded or Failed" May 15 00:50:49.701: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca container projected-secret-volume-test: STEP: delete the pod May 15 00:50:49.964: INFO: Waiting for pod pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca to disappear May 15 00:50:50.071: INFO: Pod pod-projected-secrets-a5f7441a-60d4-474b-8cd5-9ea0ec8c8bca no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2625" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":2959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:50.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2843 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2843 to expose endpoints map[] May 15 00:50:50.221: INFO: Get endpoints failed (52.187306ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 15 00:50:51.224: INFO: successfully validated that service endpoint-test2 in namespace services-2843 exposes endpoints map[] (1.055553377s elapsed) STEP: Creating pod pod1 in namespace services-2843 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2843 to expose endpoints map[pod1:[80]] May 15 00:50:55.320: INFO: successfully validated that service endpoint-test2 in namespace services-2843 exposes endpoints map[pod1:[80]] (4.089034653s elapsed) STEP: Creating pod pod2 in namespace services-2843 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2843 to expose endpoints map[pod1:[80] pod2:[80]] May 15 00:50:58.647: INFO: successfully validated that service endpoint-test2 in namespace services-2843 exposes endpoints map[pod1:[80] pod2:[80]] (3.323259243s elapsed) STEP: Deleting pod pod1 in namespace services-2843 STEP: waiting up to 3m0s for service endpoint-test2 in namespace 
services-2843 to expose endpoints map[pod2:[80]] May 15 00:50:59.734: INFO: successfully validated that service endpoint-test2 in namespace services-2843 exposes endpoints map[pod2:[80]] (1.08229055s elapsed) STEP: Deleting pod pod2 in namespace services-2843 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2843 to expose endpoints map[] May 15 00:50:59.784: INFO: successfully validated that service endpoint-test2 in namespace services-2843 exposes endpoints map[] (25.174121ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:50:59.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2843" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.829 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":185,"skipped":2995,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:50:59.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:50:59.959: INFO: Creating ReplicaSet my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1 May 15 00:50:59.990: INFO: Pod name my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1: Found 0 pods out of 1 May 15 00:51:04.996: INFO: Pod name my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1: Found 1 pods out of 1 May 15 00:51:04.996: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1" is running May 15 00:51:05.002: INFO: Pod "my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1-4lhfj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:51:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:51:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:51:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 00:50:59 +0000 UTC Reason: Message:}]) May 15 00:51:05.003: INFO: Trying to dial the pod May 15 00:51:10.013: INFO: Controller my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1: Got expected result from replica 1 [my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1-4lhfj]: 
"my-hostname-basic-6ddc2a21-e78c-4348-bb6d-843bfa431ec1-4lhfj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:51:10.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7009" for this suite. • [SLOW TEST:10.108 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":186,"skipped":2995,"failed":0} S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:51:10.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:51:17.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-210" for this suite. 
• [SLOW TEST:7.154 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":187,"skipped":2996,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:51:17.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-aebdb7f7-e3de-4a0a-acdc-211f876492b7 STEP: Creating a pod to test consume configMaps May 15 00:51:17.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5" in namespace "configmap-5981" to be "Succeeded or Failed" May 15 00:51:17.679: INFO: Pod "pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.010985ms May 15 00:51:19.689: INFO: Pod "pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049557347s May 15 00:51:21.700: INFO: Pod "pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061073869s STEP: Saw pod success May 15 00:51:21.701: INFO: Pod "pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5" satisfied condition "Succeeded or Failed" May 15 00:51:21.703: INFO: Trying to get logs from node latest-worker pod pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5 container configmap-volume-test: STEP: delete the pod May 15 00:51:21.735: INFO: Waiting for pod pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5 to disappear May 15 00:51:21.751: INFO: Pod pod-configmaps-240a16d4-fe09-474d-b5e4-23fa6a3f1ab5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:51:21.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5981" for this suite. 
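Editor's sketch: the non-root ConfigMap consumption pattern exercised above boils down to a pod that mounts a configMap volume while running under an unprivileged UID. A minimal hand-run version follows; the names and the UID 1000 are illustrative assumptions, and busybox:1.29 mirrors the image this suite uses.

kubectl create configmap configmap-nonroot-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot
spec:
  securityContext:
    runAsUser: 1000          # run the whole pod as a non-root UID
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id; cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-nonroot-demo
EOF

# The container log should show uid=1000 followed by the key's value
kubectl logs pod-configmap-nonroot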
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":2996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:51:21.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 15 00:51:21.915: INFO: PodSpec: initContainers in spec.initContainers May 15 00:52:15.869: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a5fa497a-3baa-4242-9208-077b550bc237", GenerateName:"", Namespace:"init-container-2858", SelfLink:"/api/v1/namespaces/init-container-2858/pods/pod-init-a5fa497a-3baa-4242-9208-077b550bc237", UID:"1734cd79-b231-42b5-9db6-855ed408fd6f", ResourceVersion:"4687874", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725100681, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"915118463"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c8d480), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c8d4a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c8d4c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c8d4e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xkt4g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0067b01c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xkt4g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xkt4g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xkt4g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0051b7558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008f2d20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051b75e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051b7600)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0051b7608), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0051b760c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100682, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100682, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100682, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100681, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.195", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.195"}}, StartTime:(*v1.Time)(0xc002c8d500), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002c8d580), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008f2e00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://103dad5225d9823c5147a7b1e550383c4b6a3fd40ada4357526182f439486947", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c8d5c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c8d540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0051b768f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:52:15.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2858" for this suite. • [SLOW TEST:54.128 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":189,"skipped":3054,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:52:15.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-46786b8c-40f5-4ff4-bd6f-b212eccd55ab STEP: Creating a pod to test consume secrets May 15 00:52:16.111: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe" in namespace "projected-8211" to be "Succeeded or Failed" May 15 00:52:16.135: INFO: Pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 24.571733ms May 15 00:52:18.139: INFO: Pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028920392s May 15 00:52:20.144: INFO: Pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.033818827s May 15 00:52:22.149: INFO: Pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038679433s STEP: Saw pod success May 15 00:52:22.149: INFO: Pod "pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe" satisfied condition "Succeeded or Failed" May 15 00:52:22.152: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe container projected-secret-volume-test: STEP: delete the pod May 15 00:52:22.197: INFO: Waiting for pod pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe to disappear May 15 00:52:22.207: INFO: Pod pod-projected-secrets-9394bd5a-0741-44e4-b293-558c71528bbe no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:52:22.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8211" for this suite. • [SLOW TEST:6.327 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3059,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:52:22.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-55723e5d-e256-40cd-a272-4766c9f8ff06 STEP: Creating secret with name s-test-opt-upd-38e670e4-de60-489d-bbf5-7cc764f0e75f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-55723e5d-e256-40cd-a272-4766c9f8ff06 STEP: Updating secret s-test-opt-upd-38e670e4-de60-489d-bbf5-7cc764f0e75f STEP: Creating secret with name s-test-opt-create-a32afa75-28bd-4484-886b-b763e0c591bf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:52:32.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5012" for this suite. 
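Editor's sketch: the optional-update behaviour verified above (deleted, updated, and newly created optional secrets all being reflected in an already-mounted projected volume) can be observed manually. The names below are hypothetical; note that the kubelet propagates secret changes on its sync period, so an update can take on the order of a minute to appear inside the container.

kubectl create secret generic s-opt-upd --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-watch
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: s-opt-upd
          optional: true     # the pod still starts even if this secret is absent
EOF

# Update the secret in place, then poll the mounted file until the new value shows up
kubectl create secret generic s-opt-upd --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl exec secret-volume-watch -- cat /etc/projected/data-1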
• [SLOW TEST:10.424 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3081,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:52:32.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-fa50beef-846b-4438-9ec4-ba59750d4461 STEP: Creating a pod to test consume configMaps May 15 00:52:33.724: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5" in namespace "configmap-5395" to be "Succeeded or Failed" May 15 00:52:33.808: INFO: Pod "pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5": Phase="Pending", Reason="", readiness=false. Elapsed: 84.409232ms May 15 00:52:35.947: INFO: Pod "pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223232338s May 15 00:52:38.066: INFO: Pod "pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.342316624s STEP: Saw pod success May 15 00:52:38.066: INFO: Pod "pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5" satisfied condition "Succeeded or Failed" May 15 00:52:38.070: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5 container configmap-volume-test: STEP: delete the pod May 15 00:52:38.154: INFO: Waiting for pod pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5 to disappear May 15 00:52:38.203: INFO: Pod pod-configmaps-b7ecf07f-e59c-49a9-9b95-db834bfa70f5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:52:38.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5395" for this suite. 
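Editor's sketch: the "with mappings" variant that just passed differs from a plain configMap volume only in that items: rewrites each key to a chosen path inside the mount. A hedged reconstruction, with illustrative names:

kubectl create configmap configmap-mappings-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mappings
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mappings-demo
      items:
      - key: data-1
        path: path/to/data-2   # the key is exposed at this relative path instead of its own name
EOF

kubectl logs pod-configmap-mappings   # should print value-1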
• [SLOW TEST:5.593 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":3082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:52:38.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:52:38.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8840" for this suite. 
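The discovery walk above is root to leaf: /apis lists API groups, /apis/apiextensions.k8s.io lists that group's versions, and /apis/apiextensions.k8s.io/v1 lists the resources served at that group/version. Trimmed to the fields the spec checks, the final document has roughly this shape:

    kind: APIResourceList
    apiVersion: v1
    groupVersion: apiextensions.k8s.io/v1
    resources:
    - name: customresourcedefinitions
      kind: CustomResourceDefinition
      namespaced: false
      verbs: [create, delete, deletecollection, get, list, patch, update, watch]

Finding customresourcedefinitions in that list is what proves the CRD API is advertised to clients; kubectl and informer caches rely on discovery rather than hard-coded paths.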
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":193,"skipped":3127,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:52:38.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:52:39.136: INFO: Create a RollingUpdate DaemonSet May 15 00:52:39.152: INFO: Check that daemon pods launch on every node of the cluster May 15 00:52:39.172: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:39.258: INFO: Number of nodes with available pods: 0 May 15 00:52:39.258: INFO: Node latest-worker is running more than one daemon pod May 15 00:52:40.264: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:40.353: INFO: Number of nodes with available pods: 0 May 15 00:52:40.353: INFO: Node latest-worker is running more than one daemon pod May 15 00:52:41.348: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:41.351: INFO: Number of nodes with available pods: 0 May 15 00:52:41.351: INFO: Node latest-worker is running more than one daemon pod May 15 00:52:42.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:42.321: INFO: Number of nodes with available pods: 0 May 15 00:52:42.321: INFO: Node latest-worker is running more than one daemon pod May 15 00:52:43.261: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:43.264: INFO: Number of nodes with available pods: 0 May 15 00:52:43.264: INFO: Node latest-worker is running more than one daemon pod May 15 00:52:44.263: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:44.268: INFO: Number of nodes with available pods: 2 May 15 00:52:44.268: INFO: Number of running nodes: 2, number of available pods: 2 May 15 00:52:44.268: INFO: Update the DaemonSet to trigger a rollout May 15 00:52:44.275: INFO: Updating DaemonSet daemon-set May 15 00:52:55.381: INFO: Roll back the DaemonSet before rollout is complete May 15 
00:52:55.411: INFO: Updating DaemonSet daemon-set May 15 00:52:55.411: INFO: Make sure DaemonSet rollback is complete May 15 00:52:55.429: INFO: Wrong image for pod: daemon-set-gbznf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 15 00:52:55.429: INFO: Pod daemon-set-gbznf is not available May 15 00:52:55.449: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:56.454: INFO: Wrong image for pod: daemon-set-gbznf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 15 00:52:56.454: INFO: Pod daemon-set-gbznf is not available May 15 00:52:56.458: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:57.532: INFO: Wrong image for pod: daemon-set-gbznf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 15 00:52:57.532: INFO: Pod daemon-set-gbznf is not available May 15 00:52:57.556: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 00:52:58.455: INFO: Pod daemon-set-q6d8m is not available May 15 00:52:58.460: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4049, will wait for the garbage collector to delete the pods May 15 00:52:58.531: INFO: Deleting DaemonSet.extensions daemon-set took: 7.500071ms May 15 00:52:58.931: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.308945ms May 15 00:53:04.963: INFO: Number of nodes with available pods: 0 May 15 00:53:04.963: INFO: Number of running nodes: 0, number of available pods: 0 May 15 00:53:04.966: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4049/daemonsets","resourceVersion":"4688248"},"items":null} May 15 00:53:04.969: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4049/pods","resourceVersion":"4688248"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:53:04.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4049" for this suite. 
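What the rollback assertion protects: the update deliberately switches the DaemonSet to an unresolvable image (foo:non-existent), so the RollingUpdate strategy replaces one pod per node and that replacement can never become ready; rolling back must replace only the wedged pods and leave the replicas still running the good image untouched, hence "without unnecessary restarts". A sketch of the starting object (labels and names are illustrative; the good image is the one named in the log):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate                # rollback re-rolls to the previous template
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine   # known-good image from this run

Against a live cluster the equivalent flow is kubectl set image ds/daemon-set app=foo:non-existent followed by kubectl rollout undo ds/daemon-set; the suite drives the same change through the API.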
• [SLOW TEST:26.326 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":194,"skipped":3130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:53:04.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:53:05.904: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:53:07.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 00:53:09.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725100785, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:53:12.982: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:53:23.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4482" for this suite. STEP: Destroying namespace "webhook-4482-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.372 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":195,"skipped":3156,"failed":0} S ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:53:23.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:53:23.427: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2a96ea2c-629e-44d3-b4b1-7149a43cb794" in namespace "security-context-test-196" to be "Succeeded or Failed" May 15 00:53:23.479: INFO: Pod "alpine-nnp-false-2a96ea2c-629e-44d3-b4b1-7149a43cb794": Phase="Pending", Reason="", readiness=false. 
Elapsed: 51.985674ms May 15 00:53:25.483: INFO: Pod "alpine-nnp-false-2a96ea2c-629e-44d3-b4b1-7149a43cb794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056198511s May 15 00:53:27.486: INFO: Pod "alpine-nnp-false-2a96ea2c-629e-44d3-b4b1-7149a43cb794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058899974s May 15 00:53:27.486: INFO: Pod "alpine-nnp-false-2a96ea2c-629e-44d3-b4b1-7149a43cb794" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:53:27.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-196" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3157,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:53:27.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:53:31.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4877" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3169,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:53:31.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:05.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2605" for this suite. STEP: Destroying namespace "nsdeletetest-7746" for this suite. May 15 00:54:05.940: INFO: Namespace nsdeletetest-7746 was already deleted STEP: Destroying namespace "nsdeletetest-5183" for this suite. • [SLOW TEST:34.334 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":198,"skipped":3170,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:05.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:54:06.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 15 00:54:06.213: INFO: stderr: "" May 15 00:54:06.213: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:06.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7607" for this suite. 
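The version spec only asserts that both stanzas (client and server) are printed; note the intentional skew in this run, a v1.19.0-alpha.3.35 client against a v1.18.2 server. For scripting, the same data is available structured: kubectl version -o yaml returns roughly the following (values taken from this run's stdout, trimmed):

    clientVersion:
      gitVersion: v1.19.0-alpha.3.35+3416442e4b7eeb
      major: "1"
      minor: "19+"
      goVersion: go1.13.10
      platform: linux/amd64
    serverVersion:
      gitVersion: v1.18.2
      major: "1"
      minor: "18"
      goVersion: go1.13.9
      platform: linux/amd64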
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":199,"skipped":3184,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:06.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-7cfc1a16-187b-4a97-8378-508ac860ef46 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7944" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":200,"skipped":3188,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:06.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:14.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-15" for this suite. 
• [SLOW TEST:8.080 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:14.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:54:14.491: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 15 00:54:19.498: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 00:54:19.498: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 15 00:54:25.563: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2976 /apis/apps/v1/namespaces/deployment-2976/deployments/test-cleanup-deployment 37948b6b-a443-4c46-b352-15c454b3752d 4688820 1 2020-05-15 00:54:19 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-05-15 00:54:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 00:54:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e7cd88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 00:54:19 +0000 UTC,LastTransitionTime:2020-05-15 00:54:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-6688745694" has successfully progressed.,LastUpdateTime:2020-05-15 00:54:23 +0000 UTC,LastTransitionTime:2020-05-15 00:54:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 00:54:25.566: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-2976 /apis/apps/v1/namespaces/deployment-2976/replicasets/test-cleanup-deployment-6688745694 7139823f-2fdd-4cdb-9ae5-b51fb8ffab00 4688807 1 2020-05-15 00:54:19 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 37948b6b-a443-4c46-b352-15c454b3752d 0xc003e7d1c7 0xc003e7d1c8}] [] [{kube-controller-manager Update apps/v1 2020-05-15 00:54:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37948b6b-a443-4c46-b352-15c454b3752d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e7d258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 00:54:25.569: INFO: Pod "test-cleanup-deployment-6688745694-42tb2" is available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-42tb2 test-cleanup-deployment-6688745694- deployment-2976 /api/v1/namespaces/deployment-2976/pods/test-cleanup-deployment-6688745694-42tb2 a067b86b-2b2e-4fd3-b398-caed8e9ccf69 4688806 0 2020-05-15 00:54:19 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 7139823f-2fdd-4cdb-9ae5-b51fb8ffab00 0xc003e7d687 0xc003e7d688}] [] [{kube-controller-manager Update v1 2020-05-15 00:54:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7139823f-2fdd-4cdb-9ae5-b51fb8ffab00\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 00:54:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qhrzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qhrzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qhrzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:54:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 00:54:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:54:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 00:54:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.240,StartTime:2020-05-15 00:54:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 00:54:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://123de5724570688e1e1c3a53b9b7396f623fb01c1ce47f028200880d05df1814,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:25.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2976" for this suite. • [SLOW TEST:11.153 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":202,"skipped":3222,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:25.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-26541df0-daa8-4a03-b323-26f06ffe109f STEP: Creating a pod to test consume secrets May 15 00:54:25.792: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1" in namespace "projected-7466" to be "Succeeded or Failed" May 15 00:54:25.814: INFO: Pod "pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.711583ms May 15 00:54:27.819: INFO: Pod "pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027084429s May 15 00:54:29.823: INFO: Pod "pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031496858s STEP: Saw pod success May 15 00:54:29.823: INFO: Pod "pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1" satisfied condition "Succeeded or Failed" May 15 00:54:29.827: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1 container projected-secret-volume-test: STEP: delete the pod May 15 00:54:29.873: INFO: Waiting for pod pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1 to disappear May 15 00:54:29.893: INFO: Pod pod-projected-secrets-4d58adbf-855c-4d14-893c-9bdae0bc57a1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:29.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7466" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3223,"failed":0} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:29.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 15 00:54:34.520: INFO: Successfully updated pod "pod-update-731e3e63-6b9d-43a4-b7da-b63171eddd8f" STEP: verifying the updated pod is in kubernetes May 15 00:54:34.583: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:34.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-79" for this suite. 
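Only a handful of pod fields are mutable after creation (labels and annotations among them; almost all of spec is not), so the update above is a metadata change that is written and then read back for verification. Outside the suite the equivalent is a merge patch applied with kubectl patch --type merge; the label key and value below are illustrative:

    # patch body (JSON is a subset of YAML, so either spelling works in -p)
    metadata:
      labels:
        time: updated

A strategic merge patch or JSON patch would work equally well here; a plain merge patch is the simplest for a pure label change.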
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:34.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 00:54:39.744: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:39.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9601" for this suite. • [SLOW TEST:5.295 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3258,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:39.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
secret-emptykey-test-c379e9ef-1290-4997-bcb4-d43fcf83832c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3214" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":206,"skipped":3264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:40.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 00:54:40.359: INFO: Waiting up to 5m0s for pod "pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b" in namespace "emptydir-296" to be "Succeeded or Failed" May 15 00:54:40.387: INFO: Pod "pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.471816ms May 15 00:54:42.413: INFO: Pod "pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053083679s May 15 00:54:44.417: INFO: Pod "pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057745408s STEP: Saw pod success May 15 00:54:44.417: INFO: Pod "pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b" satisfied condition "Succeeded or Failed" May 15 00:54:44.421: INFO: Trying to get logs from node latest-worker2 pod pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b container test-container: STEP: delete the pod May 15 00:54:44.499: INFO: Waiting for pod pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b to disappear May 15 00:54:44.516: INFO: Pod pod-ccaec7d3-78f9-44cd-bf63-97726eb26e6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:54:44.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-296" for this suite. 
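In the emptyDir test matrix, "tmpfs" means medium: Memory on the volume, while "non-root" and "0644" describe the uid the pod runs as and the file mode the test container creates and verifies; the mode is not a field on the volume itself. Reduced to essentials (uid, paths, and command are illustrative, not the suite's exact test image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo                  # illustrative
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                    # any non-root uid
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                   # tmpfs-backed emptyDir

The suite's test container performs the same kind of write-and-stat, and the assertion compares the reported mode and owner against the expected -rw-r--r-- for the non-root uid.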
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3319,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:54:44.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5065 May 15 00:54:48.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 15 00:54:48.830: INFO: stderr: "I0515 00:54:48.751446 2768 log.go:172] (0xc00061b970) (0xc000a7c8c0) Create stream\nI0515 00:54:48.751491 2768 log.go:172] (0xc00061b970) (0xc000a7c8c0) Stream added, broadcasting: 1\nI0515 00:54:48.753641 2768 log.go:172] (0xc00061b970) Reply frame received for 1\nI0515 00:54:48.753673 2768 log.go:172] (0xc00061b970) (0xc000a7c960) Create stream\nI0515 00:54:48.753681 2768 log.go:172] (0xc00061b970) (0xc000a7c960) Stream added, broadcasting: 3\nI0515 00:54:48.754453 2768 log.go:172] (0xc00061b970) Reply frame received for 3\nI0515 00:54:48.754480 2768 log.go:172] (0xc00061b970) (0xc00039ee60) Create stream\nI0515 00:54:48.754490 2768 log.go:172] (0xc00061b970) (0xc00039ee60) Stream added, broadcasting: 5\nI0515 00:54:48.755299 2768 log.go:172] (0xc00061b970) Reply frame received for 5\nI0515 00:54:48.814358 2768 log.go:172] (0xc00061b970) Data frame received for 5\nI0515 00:54:48.814378 2768 log.go:172] (0xc00039ee60) (5) Data frame handling\nI0515 00:54:48.814388 2768 log.go:172] (0xc00039ee60) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0515 00:54:48.822939 2768 log.go:172] (0xc00061b970) Data frame received for 3\nI0515 00:54:48.822950 2768 log.go:172] (0xc000a7c960) (3) Data frame handling\nI0515 00:54:48.822956 2768 log.go:172] (0xc000a7c960) (3) Data frame sent\nI0515 00:54:48.823791 2768 log.go:172] (0xc00061b970) Data frame received for 3\nI0515 00:54:48.823810 2768 log.go:172] (0xc000a7c960) (3) Data frame handling\nI0515 00:54:48.823846 2768 log.go:172] (0xc00061b970) Data frame received for 5\nI0515 00:54:48.823871 2768 log.go:172] (0xc00039ee60) (5) Data frame handling\nI0515 00:54:48.825431 2768 log.go:172] (0xc00061b970) Data frame received for 1\nI0515 00:54:48.825452 2768 log.go:172] (0xc000a7c8c0) (1) Data frame handling\nI0515 00:54:48.825479 2768 log.go:172] (0xc000a7c8c0) (1) Data frame sent\nI0515 00:54:48.825501 2768 log.go:172] (0xc00061b970) (0xc000a7c8c0) Stream removed, broadcasting: 1\nI0515 00:54:48.825521 2768 log.go:172] (0xc00061b970) Go away received\nI0515 
00:54:48.826058 2768 log.go:172] (0xc00061b970) (0xc000a7c8c0) Stream removed, broadcasting: 1\nI0515 00:54:48.826092 2768 log.go:172] (0xc00061b970) (0xc000a7c960) Stream removed, broadcasting: 3\nI0515 00:54:48.826113 2768 log.go:172] (0xc00061b970) (0xc00039ee60) Stream removed, broadcasting: 5\n" May 15 00:54:48.830: INFO: stdout: "iptables" May 15 00:54:48.831: INFO: proxyMode: iptables May 15 00:54:48.834: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:54:48.839: INFO: Pod kube-proxy-mode-detector still exists May 15 00:54:50.839: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:54:50.856: INFO: Pod kube-proxy-mode-detector still exists May 15 00:54:52.839: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 15 00:54:52.862: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-5065 STEP: creating replication controller affinity-nodeport-timeout in namespace services-5065 I0515 00:54:52.913008 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5065, replica count: 3 I0515 00:54:55.963459 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:54:58.963695 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:54:58.972: INFO: Creating new exec pod May 15 00:55:03.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 15 00:55:04.233: INFO: stderr: "I0515 00:55:04.128855 2788 log.go:172] (0xc0009808f0) (0xc000a585a0) Create stream\nI0515 00:55:04.128992 2788 log.go:172] (0xc0009808f0) (0xc000a585a0) Stream added, broadcasting: 1\nI0515 00:55:04.133612 2788 log.go:172] (0xc0009808f0) Reply frame received for 1\nI0515 00:55:04.133662 2788 log.go:172] (0xc0009808f0) (0xc000810d20) Create stream\nI0515 00:55:04.133684 2788 log.go:172] (0xc0009808f0) (0xc000810d20) Stream added, broadcasting: 3\nI0515 00:55:04.134735 2788 log.go:172] (0xc0009808f0) Reply frame received for 3\nI0515 00:55:04.134829 2788 log.go:172] (0xc0009808f0) (0xc000811cc0) Create stream\nI0515 00:55:04.134857 2788 log.go:172] (0xc0009808f0) (0xc000811cc0) Stream added, broadcasting: 5\nI0515 00:55:04.135797 2788 log.go:172] (0xc0009808f0) Reply frame received for 5\nI0515 00:55:04.227722 2788 log.go:172] (0xc0009808f0) Data frame received for 5\nI0515 00:55:04.227756 2788 log.go:172] (0xc000811cc0) (5) Data frame handling\nI0515 00:55:04.227776 2788 log.go:172] (0xc000811cc0) (5) Data frame sent\nI0515 00:55:04.227785 2788 log.go:172] (0xc0009808f0) Data frame received for 5\nI0515 00:55:04.227792 2788 log.go:172] (0xc000811cc0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0515 00:55:04.228576 2788 log.go:172] (0xc0009808f0) Data frame received for 3\nI0515 00:55:04.228595 2788 log.go:172] (0xc000810d20) (3) Data frame handling\nI0515 00:55:04.228776 2788 log.go:172] (0xc0009808f0) Data frame received for 1\nI0515 00:55:04.228795 2788 log.go:172] (0xc000a585a0) (1) Data frame handling\nI0515 00:55:04.228805 2788 log.go:172] (0xc000a585a0) (1) 
Data frame sent\nI0515 00:55:04.228819 2788 log.go:172] (0xc0009808f0) (0xc000a585a0) Stream removed, broadcasting: 1\nI0515 00:55:04.228837 2788 log.go:172] (0xc0009808f0) Go away received\nI0515 00:55:04.229322 2788 log.go:172] (0xc0009808f0) (0xc000a585a0) Stream removed, broadcasting: 1\nI0515 00:55:04.229340 2788 log.go:172] (0xc0009808f0) (0xc000810d20) Stream removed, broadcasting: 3\nI0515 00:55:04.229349 2788 log.go:172] (0xc0009808f0) (0xc000811cc0) Stream removed, broadcasting: 5\n" May 15 00:55:04.233: INFO: stdout: "" May 15 00:55:04.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c nc -zv -t -w 2 10.101.178.202 80' May 15 00:55:04.433: INFO: stderr: "I0515 00:55:04.361889 2807 log.go:172] (0xc000aba840) (0xc0000eb900) Create stream\nI0515 00:55:04.361945 2807 log.go:172] (0xc000aba840) (0xc0000eb900) Stream added, broadcasting: 1\nI0515 00:55:04.364088 2807 log.go:172] (0xc000aba840) Reply frame received for 1\nI0515 00:55:04.364144 2807 log.go:172] (0xc000aba840) (0xc0001401e0) Create stream\nI0515 00:55:04.364162 2807 log.go:172] (0xc000aba840) (0xc0001401e0) Stream added, broadcasting: 3\nI0515 00:55:04.365057 2807 log.go:172] (0xc000aba840) Reply frame received for 3\nI0515 00:55:04.365094 2807 log.go:172] (0xc000aba840) (0xc000246aa0) Create stream\nI0515 00:55:04.365106 2807 log.go:172] (0xc000aba840) (0xc000246aa0) Stream added, broadcasting: 5\nI0515 00:55:04.366299 2807 log.go:172] (0xc000aba840) Reply frame received for 5\nI0515 00:55:04.425759 2807 log.go:172] (0xc000aba840) Data frame received for 3\nI0515 00:55:04.425797 2807 log.go:172] (0xc0001401e0) (3) Data frame handling\nI0515 00:55:04.425829 2807 log.go:172] (0xc000aba840) Data frame received for 5\nI0515 00:55:04.425844 2807 log.go:172] (0xc000246aa0) (5) Data frame handling\nI0515 00:55:04.425861 2807 log.go:172] (0xc000246aa0) (5) Data frame sent\nI0515 00:55:04.425873 2807 log.go:172] (0xc000aba840) Data frame received for 5\nI0515 00:55:04.425884 2807 log.go:172] (0xc000246aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.178.202 80\nConnection to 10.101.178.202 80 port [tcp/http] succeeded!\nI0515 00:55:04.425908 2807 log.go:172] (0xc000246aa0) (5) Data frame sent\nI0515 00:55:04.425921 2807 log.go:172] (0xc000aba840) Data frame received for 5\nI0515 00:55:04.425931 2807 log.go:172] (0xc000246aa0) (5) Data frame handling\nI0515 00:55:04.427345 2807 log.go:172] (0xc000aba840) Data frame received for 1\nI0515 00:55:04.427369 2807 log.go:172] (0xc0000eb900) (1) Data frame handling\nI0515 00:55:04.427378 2807 log.go:172] (0xc0000eb900) (1) Data frame sent\nI0515 00:55:04.427392 2807 log.go:172] (0xc000aba840) (0xc0000eb900) Stream removed, broadcasting: 1\nI0515 00:55:04.427417 2807 log.go:172] (0xc000aba840) Go away received\nI0515 00:55:04.427774 2807 log.go:172] (0xc000aba840) (0xc0000eb900) Stream removed, broadcasting: 1\nI0515 00:55:04.427797 2807 log.go:172] (0xc000aba840) (0xc0001401e0) Stream removed, broadcasting: 3\nI0515 00:55:04.427808 2807 log.go:172] (0xc000aba840) (0xc000246aa0) Stream removed, broadcasting: 5\n" May 15 00:55:04.433: INFO: stdout: "" May 15 00:55:04.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30929' May 15 00:55:04.648: INFO: stderr: "I0515 00:55:04.574194 2830 log.go:172] 
(0xc000b174a0) (0xc000679ea0) Create stream\nI0515 00:55:04.574252 2830 log.go:172] (0xc000b174a0) (0xc000679ea0) Stream added, broadcasting: 1\nI0515 00:55:04.578793 2830 log.go:172] (0xc000b174a0) Reply frame received for 1\nI0515 00:55:04.578836 2830 log.go:172] (0xc000b174a0) (0xc00063ea00) Create stream\nI0515 00:55:04.578846 2830 log.go:172] (0xc000b174a0) (0xc00063ea00) Stream added, broadcasting: 3\nI0515 00:55:04.579718 2830 log.go:172] (0xc000b174a0) Reply frame received for 3\nI0515 00:55:04.579746 2830 log.go:172] (0xc000b174a0) (0xc00051e500) Create stream\nI0515 00:55:04.579760 2830 log.go:172] (0xc000b174a0) (0xc00051e500) Stream added, broadcasting: 5\nI0515 00:55:04.580559 2830 log.go:172] (0xc000b174a0) Reply frame received for 5\nI0515 00:55:04.639867 2830 log.go:172] (0xc000b174a0) Data frame received for 5\nI0515 00:55:04.639898 2830 log.go:172] (0xc00051e500) (5) Data frame handling\nI0515 00:55:04.639925 2830 log.go:172] (0xc00051e500) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30929\nI0515 00:55:04.640331 2830 log.go:172] (0xc000b174a0) Data frame received for 5\nI0515 00:55:04.640356 2830 log.go:172] (0xc00051e500) (5) Data frame handling\nI0515 00:55:04.640377 2830 log.go:172] (0xc00051e500) (5) Data frame sent\nConnection to 172.17.0.13 30929 port [tcp/30929] succeeded!\nI0515 00:55:04.640921 2830 log.go:172] (0xc000b174a0) Data frame received for 5\nI0515 00:55:04.640939 2830 log.go:172] (0xc00051e500) (5) Data frame handling\nI0515 00:55:04.641286 2830 log.go:172] (0xc000b174a0) Data frame received for 3\nI0515 00:55:04.641313 2830 log.go:172] (0xc00063ea00) (3) Data frame handling\nI0515 00:55:04.642968 2830 log.go:172] (0xc000b174a0) Data frame received for 1\nI0515 00:55:04.642987 2830 log.go:172] (0xc000679ea0) (1) Data frame handling\nI0515 00:55:04.643002 2830 log.go:172] (0xc000679ea0) (1) Data frame sent\nI0515 00:55:04.643019 2830 log.go:172] (0xc000b174a0) (0xc000679ea0) Stream removed, broadcasting: 1\nI0515 00:55:04.643052 2830 log.go:172] (0xc000b174a0) Go away received\nI0515 00:55:04.643341 2830 log.go:172] (0xc000b174a0) (0xc000679ea0) Stream removed, broadcasting: 1\nI0515 00:55:04.643355 2830 log.go:172] (0xc000b174a0) (0xc00063ea00) Stream removed, broadcasting: 3\nI0515 00:55:04.643363 2830 log.go:172] (0xc000b174a0) (0xc00051e500) Stream removed, broadcasting: 5\n" May 15 00:55:04.648: INFO: stdout: "" May 15 00:55:04.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30929' May 15 00:55:04.942: INFO: stderr: "I0515 00:55:04.789106 2849 log.go:172] (0xc00003a370) (0xc0004dcdc0) Create stream\nI0515 00:55:04.789278 2849 log.go:172] (0xc00003a370) (0xc0004dcdc0) Stream added, broadcasting: 1\nI0515 00:55:04.791395 2849 log.go:172] (0xc00003a370) Reply frame received for 1\nI0515 00:55:04.791443 2849 log.go:172] (0xc00003a370) (0xc00074c640) Create stream\nI0515 00:55:04.791468 2849 log.go:172] (0xc00003a370) (0xc00074c640) Stream added, broadcasting: 3\nI0515 00:55:04.792508 2849 log.go:172] (0xc00003a370) Reply frame received for 3\nI0515 00:55:04.792577 2849 log.go:172] (0xc00003a370) (0xc000a12000) Create stream\nI0515 00:55:04.792611 2849 log.go:172] (0xc00003a370) (0xc000a12000) Stream added, broadcasting: 5\nI0515 00:55:04.793800 2849 log.go:172] (0xc00003a370) Reply frame received for 5\nI0515 00:55:04.934747 2849 log.go:172] (0xc00003a370) Data frame received for 3\nI0515 
00:55:04.934784 2849 log.go:172] (0xc00074c640) (3) Data frame handling\nI0515 00:55:04.934811 2849 log.go:172] (0xc00003a370) Data frame received for 5\nI0515 00:55:04.934826 2849 log.go:172] (0xc000a12000) (5) Data frame handling\nI0515 00:55:04.934842 2849 log.go:172] (0xc000a12000) (5) Data frame sent\nI0515 00:55:04.934854 2849 log.go:172] (0xc00003a370) Data frame received for 5\nI0515 00:55:04.934865 2849 log.go:172] (0xc000a12000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30929\nConnection to 172.17.0.12 30929 port [tcp/30929] succeeded!\nI0515 00:55:04.936145 2849 log.go:172] (0xc00003a370) Data frame received for 1\nI0515 00:55:04.936208 2849 log.go:172] (0xc0004dcdc0) (1) Data frame handling\nI0515 00:55:04.936232 2849 log.go:172] (0xc0004dcdc0) (1) Data frame sent\nI0515 00:55:04.936245 2849 log.go:172] (0xc00003a370) (0xc0004dcdc0) Stream removed, broadcasting: 1\nI0515 00:55:04.936262 2849 log.go:172] (0xc00003a370) Go away received\nI0515 00:55:04.936639 2849 log.go:172] (0xc00003a370) (0xc0004dcdc0) Stream removed, broadcasting: 1\nI0515 00:55:04.936658 2849 log.go:172] (0xc00003a370) (0xc00074c640) Stream removed, broadcasting: 3\nI0515 00:55:04.936667 2849 log.go:172] (0xc00003a370) (0xc000a12000) Stream removed, broadcasting: 5\n" May 15 00:55:04.942: INFO: stdout: "" May 15 00:55:04.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30929/ ; done' May 15 00:55:05.226: INFO: stderr: "I0515 00:55:05.065379 2869 log.go:172] (0xc00062ad10) (0xc000c32460) Create stream\nI0515 00:55:05.065453 2869 log.go:172] (0xc00062ad10) (0xc000c32460) Stream added, broadcasting: 1\nI0515 00:55:05.067661 2869 log.go:172] (0xc00062ad10) Reply frame received for 1\nI0515 00:55:05.067708 2869 log.go:172] (0xc00062ad10) (0xc0006fefa0) Create stream\nI0515 00:55:05.067729 2869 log.go:172] (0xc00062ad10) (0xc0006fefa0) Stream added, broadcasting: 3\nI0515 00:55:05.068632 2869 log.go:172] (0xc00062ad10) Reply frame received for 3\nI0515 00:55:05.068673 2869 log.go:172] (0xc00062ad10) (0xc000c445a0) Create stream\nI0515 00:55:05.068684 2869 log.go:172] (0xc00062ad10) (0xc000c445a0) Stream added, broadcasting: 5\nI0515 00:55:05.069689 2869 log.go:172] (0xc00062ad10) Reply frame received for 5\nI0515 00:55:05.138390 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.138432 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.138454 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.138494 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.138520 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.138552 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.141664 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.141698 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.141729 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.141983 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.142002 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.142016 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.142047 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 
00:55:05.142066 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.142080 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.146378 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.146408 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.146433 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.146878 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.146905 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.146915 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.146930 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.146941 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.146951 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.150525 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.150557 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.150579 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.150889 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.150910 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.150921 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.150936 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.150945 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.150954 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.158731 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.158753 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.158771 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.159648 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.159682 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.159695 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.159714 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.159725 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.159737 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.162865 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.162890 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.162911 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.163199 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.163233 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.163249 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.163275 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.163295 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.163312 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.168971 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.168991 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.169009 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 
00:55:05.169730 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.169749 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.169762 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.169781 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.169808 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.169825 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.174831 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.174854 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.174879 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.175299 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.175319 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.175336 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.175352 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.175380 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.175419 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.179222 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.179251 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.179289 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.179834 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.179868 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.179897 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\nI0515 00:55:05.179910 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.179920 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.179972 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\nI0515 00:55:05.180077 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.180097 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.180114 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.182968 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.183003 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.183038 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.183407 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.183444 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.183484 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.183518 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.183543 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.183580 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.188343 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.188387 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.188419 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.188487 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.188507 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.188540 2869 log.go:172] (0xc000c445a0) (5) Data frame 
sent\nI0515 00:55:05.188557 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.188568 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.188585 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.188652 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.188666 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.188680 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\nI0515 00:55:05.192263 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.192281 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.192296 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.193107 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.193275 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.193285 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\nI0515 00:55:05.193294 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.193326 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.193349 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.193384 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.193396 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.193415 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\nI0515 00:55:05.197557 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.197576 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.197596 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.197850 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.197872 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.197884 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.197901 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.197910 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.197919 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.201384 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.201406 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.201443 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.201833 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.201869 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.201883 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.201899 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.201908 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.201917 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.205671 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.205698 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.205726 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.206214 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.206253 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.206265 2869 log.go:172] (0xc000c445a0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.206282 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.206295 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.206306 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.210524 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.210562 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.210598 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.210908 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.210925 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.210934 2869 log.go:172] (0xc000c445a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.211015 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.211049 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.211072 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.219479 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.219500 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.219514 2869 log.go:172] (0xc0006fefa0) (3) Data frame sent\nI0515 00:55:05.220264 2869 log.go:172] (0xc00062ad10) Data frame received for 5\nI0515 00:55:05.220289 2869 log.go:172] (0xc000c445a0) (5) Data frame handling\nI0515 00:55:05.220313 2869 log.go:172] (0xc00062ad10) Data frame received for 3\nI0515 00:55:05.220330 2869 log.go:172] (0xc0006fefa0) (3) Data frame handling\nI0515 00:55:05.222064 2869 log.go:172] (0xc00062ad10) Data frame received for 1\nI0515 00:55:05.222082 2869 log.go:172] (0xc000c32460) (1) Data frame handling\nI0515 00:55:05.222092 2869 log.go:172] (0xc000c32460) (1) Data frame sent\nI0515 00:55:05.222102 2869 log.go:172] (0xc00062ad10) (0xc000c32460) Stream removed, broadcasting: 1\nI0515 00:55:05.222116 2869 log.go:172] (0xc00062ad10) Go away received\nI0515 00:55:05.222369 2869 log.go:172] (0xc00062ad10) (0xc000c32460) Stream removed, broadcasting: 1\nI0515 00:55:05.222381 2869 log.go:172] (0xc00062ad10) (0xc0006fefa0) Stream removed, broadcasting: 3\nI0515 00:55:05.222387 2869 log.go:172] (0xc00062ad10) (0xc000c445a0) Stream removed, broadcasting: 5\n" May 15 00:55:05.227: INFO: stdout: "\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx\naffinity-nodeport-timeout-rlwnx" May 15 00:55:05.227: INFO: Received response from host: May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received 
response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Received response from host: affinity-nodeport-timeout-rlwnx May 15 00:55:05.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30929/' May 15 00:55:05.436: INFO: stderr: "I0515 00:55:05.348714 2889 log.go:172] (0xc0009e62c0) (0xc000b750e0) Create stream\nI0515 00:55:05.348781 2889 log.go:172] (0xc0009e62c0) (0xc000b750e0) Stream added, broadcasting: 1\nI0515 00:55:05.352023 2889 log.go:172] (0xc0009e62c0) Reply frame received for 1\nI0515 00:55:05.352055 2889 log.go:172] (0xc0009e62c0) (0xc000ab23c0) Create stream\nI0515 00:55:05.352062 2889 log.go:172] (0xc0009e62c0) (0xc000ab23c0) Stream added, broadcasting: 3\nI0515 00:55:05.352880 2889 log.go:172] (0xc0009e62c0) Reply frame received for 3\nI0515 00:55:05.352902 2889 log.go:172] (0xc0009e62c0) (0xc00084d9a0) Create stream\nI0515 00:55:05.352908 2889 log.go:172] (0xc0009e62c0) (0xc00084d9a0) Stream added, broadcasting: 5\nI0515 00:55:05.353913 2889 log.go:172] (0xc0009e62c0) Reply frame received for 5\nI0515 00:55:05.425341 2889 log.go:172] (0xc0009e62c0) Data frame received for 5\nI0515 00:55:05.425377 2889 log.go:172] (0xc00084d9a0) (5) Data frame handling\nI0515 00:55:05.425411 2889 log.go:172] (0xc00084d9a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:05.428675 2889 log.go:172] (0xc0009e62c0) Data frame received for 3\nI0515 00:55:05.428713 2889 log.go:172] (0xc000ab23c0) (3) Data frame handling\nI0515 00:55:05.428747 2889 log.go:172] (0xc000ab23c0) (3) Data frame sent\nI0515 00:55:05.429241 2889 log.go:172] (0xc0009e62c0) Data frame received for 3\nI0515 00:55:05.429317 2889 log.go:172] (0xc000ab23c0) (3) Data frame handling\nI0515 00:55:05.429404 2889 log.go:172] (0xc0009e62c0) Data frame received for 5\nI0515 00:55:05.429417 2889 log.go:172] (0xc00084d9a0) (5) Data frame handling\nI0515 00:55:05.430537 2889 log.go:172] (0xc0009e62c0) Data frame received for 1\nI0515 00:55:05.430562 2889 log.go:172] (0xc000b750e0) (1) Data frame handling\nI0515 00:55:05.430573 2889 log.go:172] (0xc000b750e0) (1) Data frame sent\nI0515 00:55:05.430582 2889 log.go:172] (0xc0009e62c0) (0xc000b750e0) Stream removed, broadcasting: 1\nI0515 00:55:05.430597 2889 log.go:172] (0xc0009e62c0) Go away received\nI0515 00:55:05.431082 2889 log.go:172] (0xc0009e62c0) (0xc000b750e0) Stream removed, broadcasting: 1\nI0515 00:55:05.431105 2889 log.go:172] (0xc0009e62c0) (0xc000ab23c0) Stream removed, broadcasting: 3\nI0515 00:55:05.431118 2889 log.go:172] (0xc0009e62c0) (0xc00084d9a0) Stream removed, broadcasting: 5\n" May 15 00:55:05.436: INFO: stdout: 
"affinity-nodeport-timeout-rlwnx" May 15 00:55:20.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30929/' May 15 00:55:20.667: INFO: stderr: "I0515 00:55:20.567236 2909 log.go:172] (0xc0000e6370) (0xc000552140) Create stream\nI0515 00:55:20.567276 2909 log.go:172] (0xc0000e6370) (0xc000552140) Stream added, broadcasting: 1\nI0515 00:55:20.568652 2909 log.go:172] (0xc0000e6370) Reply frame received for 1\nI0515 00:55:20.568690 2909 log.go:172] (0xc0000e6370) (0xc0004fec80) Create stream\nI0515 00:55:20.568703 2909 log.go:172] (0xc0000e6370) (0xc0004fec80) Stream added, broadcasting: 3\nI0515 00:55:20.569746 2909 log.go:172] (0xc0000e6370) Reply frame received for 3\nI0515 00:55:20.569768 2909 log.go:172] (0xc0000e6370) (0xc00058c3c0) Create stream\nI0515 00:55:20.569775 2909 log.go:172] (0xc0000e6370) (0xc00058c3c0) Stream added, broadcasting: 5\nI0515 00:55:20.570558 2909 log.go:172] (0xc0000e6370) Reply frame received for 5\nI0515 00:55:20.659568 2909 log.go:172] (0xc0000e6370) Data frame received for 5\nI0515 00:55:20.659601 2909 log.go:172] (0xc00058c3c0) (5) Data frame handling\nI0515 00:55:20.659626 2909 log.go:172] (0xc00058c3c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:20.660629 2909 log.go:172] (0xc0000e6370) Data frame received for 3\nI0515 00:55:20.660696 2909 log.go:172] (0xc0004fec80) (3) Data frame handling\nI0515 00:55:20.660717 2909 log.go:172] (0xc0004fec80) (3) Data frame sent\nI0515 00:55:20.661358 2909 log.go:172] (0xc0000e6370) Data frame received for 5\nI0515 00:55:20.661377 2909 log.go:172] (0xc00058c3c0) (5) Data frame handling\nI0515 00:55:20.661637 2909 log.go:172] (0xc0000e6370) Data frame received for 3\nI0515 00:55:20.661653 2909 log.go:172] (0xc0004fec80) (3) Data frame handling\nI0515 00:55:20.663198 2909 log.go:172] (0xc0000e6370) Data frame received for 1\nI0515 00:55:20.663236 2909 log.go:172] (0xc000552140) (1) Data frame handling\nI0515 00:55:20.663254 2909 log.go:172] (0xc000552140) (1) Data frame sent\nI0515 00:55:20.663275 2909 log.go:172] (0xc0000e6370) (0xc000552140) Stream removed, broadcasting: 1\nI0515 00:55:20.663313 2909 log.go:172] (0xc0000e6370) Go away received\nI0515 00:55:20.663751 2909 log.go:172] (0xc0000e6370) (0xc000552140) Stream removed, broadcasting: 1\nI0515 00:55:20.663772 2909 log.go:172] (0xc0000e6370) (0xc0004fec80) Stream removed, broadcasting: 3\nI0515 00:55:20.663783 2909 log.go:172] (0xc0000e6370) (0xc00058c3c0) Stream removed, broadcasting: 5\n" May 15 00:55:20.668: INFO: stdout: "affinity-nodeport-timeout-rlwnx" May 15 00:55:35.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5065 execpod-affinitylq47g -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30929/' May 15 00:55:35.930: INFO: stderr: "I0515 00:55:35.813528 2929 log.go:172] (0xc000aaed10) (0xc0006b1c20) Create stream\nI0515 00:55:35.813598 2929 log.go:172] (0xc000aaed10) (0xc0006b1c20) Stream added, broadcasting: 1\nI0515 00:55:35.818678 2929 log.go:172] (0xc000aaed10) Reply frame received for 1\nI0515 00:55:35.818717 2929 log.go:172] (0xc000aaed10) (0xc0006a0dc0) Create stream\nI0515 00:55:35.818728 2929 log.go:172] (0xc000aaed10) (0xc0006a0dc0) Stream added, broadcasting: 3\nI0515 00:55:35.819641 2929 log.go:172] (0xc000aaed10) 
Reply frame received for 3\nI0515 00:55:35.819728 2929 log.go:172] (0xc000aaed10) (0xc00052a140) Create stream\nI0515 00:55:35.819764 2929 log.go:172] (0xc000aaed10) (0xc00052a140) Stream added, broadcasting: 5\nI0515 00:55:35.820595 2929 log.go:172] (0xc000aaed10) Reply frame received for 5\nI0515 00:55:35.916623 2929 log.go:172] (0xc000aaed10) Data frame received for 5\nI0515 00:55:35.916651 2929 log.go:172] (0xc00052a140) (5) Data frame handling\nI0515 00:55:35.916666 2929 log.go:172] (0xc00052a140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30929/\nI0515 00:55:35.923178 2929 log.go:172] (0xc000aaed10) Data frame received for 5\nI0515 00:55:35.923220 2929 log.go:172] (0xc000aaed10) Data frame received for 3\nI0515 00:55:35.923268 2929 log.go:172] (0xc0006a0dc0) (3) Data frame handling\nI0515 00:55:35.923284 2929 log.go:172] (0xc0006a0dc0) (3) Data frame sent\nI0515 00:55:35.923296 2929 log.go:172] (0xc000aaed10) Data frame received for 3\nI0515 00:55:35.923305 2929 log.go:172] (0xc0006a0dc0) (3) Data frame handling\nI0515 00:55:35.923347 2929 log.go:172] (0xc00052a140) (5) Data frame handling\nI0515 00:55:35.924700 2929 log.go:172] (0xc000aaed10) Data frame received for 1\nI0515 00:55:35.924718 2929 log.go:172] (0xc0006b1c20) (1) Data frame handling\nI0515 00:55:35.924746 2929 log.go:172] (0xc0006b1c20) (1) Data frame sent\nI0515 00:55:35.924762 2929 log.go:172] (0xc000aaed10) (0xc0006b1c20) Stream removed, broadcasting: 1\nI0515 00:55:35.924780 2929 log.go:172] (0xc000aaed10) Go away received\nI0515 00:55:35.925491 2929 log.go:172] (0xc000aaed10) (0xc0006b1c20) Stream removed, broadcasting: 1\nI0515 00:55:35.925511 2929 log.go:172] (0xc000aaed10) (0xc0006a0dc0) Stream removed, broadcasting: 3\nI0515 00:55:35.925521 2929 log.go:172] (0xc000aaed10) (0xc00052a140) Stream removed, broadcasting: 5\n" May 15 00:55:35.930: INFO: stdout: "affinity-nodeport-timeout-jdvs4" May 15 00:55:35.930: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5065, will wait for the garbage collector to delete the pods May 15 00:55:36.774: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 613.140423ms May 15 00:55:37.174: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.236773ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:55:45.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5065" for this suite. 
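For context on what this test exercised: session affinity with a timeout is declared on the Service itself, and the run above shows the expected behaviour — sixteen back-to-back requests through the NodePort all answered from affinity-nodeport-timeout-rlwnx, and only after the client sat idle past the affinity timeout did a different backend (affinity-nodeport-timeout-jdvs4) respond. A minimal sketch of such a Service (illustrative only, not the suite's manifest; it assumes backend pods labelled app=affinity-demo listening on 8080):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-timeout-demo    # hypothetical name
spec:
  type: NodePort
  selector:
    app: affinity-demo           # assumed backend label
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP      # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10         # deliberately short idle timeout for a demo
EOF

With kube-proxy in iptables mode (as detected at the start of the test via the /proxyMode probe), the affinity is implemented with the iptables 'recent' match, so it expires per client IP once no traffic has been seen for timeoutSeconds.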
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:60.502 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":208,"skipped":3333,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:55:45.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:55:45.194: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Pending, waiting for it to be Running (with Ready = true) May 15 00:55:47.215: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Pending, waiting for it to be Running (with Ready = true) May 15 00:55:49.199: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:55:51.198: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:55:53.221: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:55:55.221: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:55:57.199: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:55:59.198: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:01.199: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:03.199: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:05.198: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:07.199: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:09.198: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = false) May 15 00:56:11.198: INFO: The status of Pod test-webserver-ad40821f-71ca-436e-9db9-2410d8a2a6f9 is Running (Ready = true) May 15 00:56:11.201: INFO: Container started at 2020-05-15 00:55:48 
+0000 UTC, pod became ready at 2020-05-15 00:56:09 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:11.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5289" for this suite. • [SLOW TEST:26.161 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:11.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 00:56:19.592: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 00:56:19.601: INFO: Pod pod-with-prestop-http-hook still exists May 15 00:56:21.602: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 00:56:21.617: INFO: Pod pod-with-prestop-http-hook still exists May 15 00:56:23.602: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 00:56:23.606: INFO: Pod pod-with-prestop-http-hook still exists May 15 00:56:25.602: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 00:56:25.607: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:25.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1959" for this suite. 
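The readiness-probe test above ("should not be ready before initial delay and never restart") relies on the probe's initialDelaySeconds: the container started at 00:55:48 but the pod only reported Ready = true about 21 seconds later, and the restart count stayed at zero throughout — readiness failures gate traffic, they never restart a container (only liveness probes do). A sketch of a pod with that shape of probe (image and values are illustrative; the suite's own are not shown in the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo     # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                 # any container with an HTTP endpoint
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20    # kubelet will not probe before this elapses
      periodSeconds: 5
EOF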
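The preStop test just logged works the other way around: when the pod carrying the hook is deleted, the kubelet fires the preStop HTTP GET before the container is stopped, and the suite verifies the request arrived at the handler pod it created first. A hedged sketch of the hook itself (path and port are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo             # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
    lifecycle:
      preStop:
        httpGet:                 # fired by the kubelet on pod deletion,
          path: /prestop         # before SIGTERM reaches the container
          port: 80
EOF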
• [SLOW TEST:14.420 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3368,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:25.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 00:56:25.772: INFO: Waiting up to 5m0s for pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e" in namespace "emptydir-7298" to be "Succeeded or Failed" May 15 00:56:25.795: INFO: Pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.956343ms May 15 00:56:27.799: INFO: Pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027096721s May 15 00:56:29.803: INFO: Pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030992385s May 15 00:56:31.807: INFO: Pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034888448s STEP: Saw pod success May 15 00:56:31.807: INFO: Pod "pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e" satisfied condition "Succeeded or Failed" May 15 00:56:31.810: INFO: Trying to get logs from node latest-worker pod pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e container test-container: STEP: delete the pod May 15 00:56:32.130: INFO: Waiting for pod pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e to disappear May 15 00:56:32.167: INFO: Pod pod-91cb4568-9b1e-486e-a5a0-9b2d94bb1b4e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:32.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7298" for this suite. 
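EmptyDir permission tests like the one above boil down to mounting a scratch volume and checking the mode bits of a file the test container creates. A self-contained approximation (busybox stands in for the suite's mount-test image):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["/bin/sh", "-c"]
    args:                        # create a 0666 file and print its mode
    - touch /ed/file && chmod 0666 /ed/file && stat -c '%a' /ed/file
    volumeMounts:
    - name: scratch
      mountPath: /ed
  volumes:
  - name: scratch
    emptyDir: {}                 # "default" medium: node-local disk
EOF
kubectl logs emptydir-mode-demo  # once the pod has Succeeded, expect: 666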
• [SLOW TEST:6.543 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":211,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:32.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4268" for this suite. 
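The QOS-class test above never has to run a workload: status.qosClass is computed from the spec when the pod is created, and requests that exactly match limits for every container yield Guaranteed. A sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                 # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:                  # requests == limits for cpu and memory
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 64Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed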
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":212,"skipped":3415,"failed":0} SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:32.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 15 00:56:32.738: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:41.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5280" for this suite. • [SLOW TEST:9.526 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:41.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:56:41.970: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 00:56:43.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-975 create -f -' May 15 00:56:47.740: INFO: stderr: "" May 15 00:56:47.740: INFO: stdout: "e2e-test-crd-publish-openapi-7932-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 15 00:56:47.740: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-975 delete e2e-test-crd-publish-openapi-7932-crds test-cr' May 15 00:56:47.872: INFO: stderr: "" May 15 00:56:47.872: INFO: stdout: "e2e-test-crd-publish-openapi-7932-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 15 00:56:47.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-975 apply -f -' May 15 00:56:48.127: INFO: stderr: "" May 15 00:56:48.127: INFO: stdout: "e2e-test-crd-publish-openapi-7932-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 15 00:56:48.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-975 delete e2e-test-crd-publish-openapi-7932-crds test-cr' May 15 00:56:48.250: INFO: stderr: "" May 15 00:56:48.250: INFO: stdout: "e2e-test-crd-publish-openapi-7932-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 15 00:56:48.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7932-crds' May 15 00:56:48.561: INFO: stderr: "" May 15 00:56:48.561: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7932-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:50.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-975" for this suite. 
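What the CRD test above checks is that a definition published without a validation schema still yields usable discovery data: unknown fields pass client-side validation, and kubectl explain prints the kind with an empty DESCRIPTION, exactly as in the stdout above. Under apiextensions.k8s.io/v1 a structural schema is mandatory, so the closest "schemaless" equivalent opts out explicitly (group and names below are illustrative):

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept any properties
EOF
kubectl explain widgets          # KIND/VERSION with an empty description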
• [SLOW TEST:8.568 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":214,"skipped":3444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:50.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:56:50.597: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315" in namespace "downward-api-2530" to be "Succeeded or Failed" May 15 00:56:50.615: INFO: Pod "downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315": Phase="Pending", Reason="", readiness=false. Elapsed: 17.664478ms May 15 00:56:52.618: INFO: Pod "downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021513928s May 15 00:56:54.640: INFO: Pod "downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043214078s STEP: Saw pod success May 15 00:56:54.640: INFO: Pod "downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315" satisfied condition "Succeeded or Failed" May 15 00:56:54.642: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315 container client-container: STEP: delete the pod May 15 00:56:54.835: INFO: Waiting for pod downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315 to disappear May 15 00:56:54.837: INFO: Pod downwardapi-volume-5b245329-8380-4654-9c8b-aef3f03a4315 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:56:54.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2530" for this suite. 
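The downward-API test above mounts the container's own resource requests as files. With a divisor the kubelet scales the exposed value, so a requests.cpu of 250m read through a 1m divisor yields the string 250. A sketch (names and values are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m            # expose the request in millicores
EOF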
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3479,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:54.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 00:56:54.922: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 00:56:54.931: INFO: Waiting for terminating namespaces to be deleted... May 15 00:56:54.932: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 15 00:56:54.959: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 15 00:56:54.959: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 15 00:56:54.959: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 15 00:56:54.959: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 15 00:56:54.959: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 00:56:54.959: INFO: Container kindnet-cni ready: true, restart count 0 May 15 00:56:54.959: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 00:56:54.959: INFO: Container kube-proxy ready: true, restart count 0 May 15 00:56:54.959: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 15 00:56:54.964: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 15 00:56:54.964: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 15 00:56:54.964: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 15 00:56:54.964: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 15 00:56:54.964: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 00:56:54.964: INFO: Container kindnet-cni ready: true, restart count 0 May 15 00:56:54.964: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 00:56:54.964: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f0da97944fca5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f0da97a688963], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 00:56:56.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6450" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":216,"skipped":3481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 00:56:56.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 15 00:56:56.095: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 00:56:56.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5403" for this suite.
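The get/update/patch calls against the CRD status sub-resource map onto the apiextensions clientset roughly as follows. This is a hedged sketch of the call shapes only, not the suite's code; the CRD name examples.example.com and the patched annotation are made-up placeholders (the suite generates random CRD names):

package main

import (
	"context"
	"fmt"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crds := clientset.NewForConfigOrDie(cfg).ApiextensionsV1().CustomResourceDefinitions()
	name := "examples.example.com" // placeholder CRD name

	// GET: the whole object, including .status, comes back.
	crd, err := crds.Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// UPDATE via the status subresource: write the object back unchanged
	// just to show the call shape (only status-relevant changes persist here).
	if crd, err = crds.UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// PATCH against the status subresource: note the trailing "status" argument.
	patched, err := crds.Patch(context.TODO(), name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"e2e":"status-patched"}}}`),
		metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println("resourceVersion after patch:", patched.ResourceVersion)
}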
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":217,"skipped":3543,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:56:56.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:56:57.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:56:59.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101017, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101017, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101018, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101017, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:57:02.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:57:02.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:04.072: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-9613" for this suite. STEP: Destroying namespace "webhook-9613-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.348 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":218,"skipped":3543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:04.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:04.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4575" for this suite. 
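The coordination.k8s.io/v1 Lease operations this spec exercises look roughly like the following with client-go. A minimal sketch under assumed values: the lease name, namespace, holder identity, and 30-second duration are all illustrative, not from the run:

package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/pointer"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	leases := kubernetes.NewForConfigOrDie(cfg).CoordinationV1().Leases("default")

	now := metav1.NewMicroTime(time.Now())
	lease, err := leases.Create(context.TODO(), &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "lease-example"}, // placeholder
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       pointer.StringPtr("holder-1"), // assumed holder
			LeaseDurationSeconds: pointer.Int32Ptr(30),
			AcquireTime:          &now,
			RenewTime:            &now,
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Renewing is just an update of spec.renewTime, as leader-election clients do.
	renewed := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &renewed
	if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("lease created and renewed:", lease.Name)
}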
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":219,"skipped":3568,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:04.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 00:57:04.438: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 00:57:07.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-409 create -f -' May 15 00:57:11.508: INFO: stderr: "" May 15 00:57:11.508: INFO: stdout: "e2e-test-crd-publish-openapi-9943-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 15 00:57:11.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-409 delete e2e-test-crd-publish-openapi-9943-crds test-cr' May 15 00:57:11.619: INFO: stderr: "" May 15 00:57:11.619: INFO: stdout: "e2e-test-crd-publish-openapi-9943-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 15 00:57:11.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-409 apply -f -' May 15 00:57:11.960: INFO: stderr: "" May 15 00:57:11.960: INFO: stdout: "e2e-test-crd-publish-openapi-9943-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 15 00:57:11.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-409 delete e2e-test-crd-publish-openapi-9943-crds test-cr' May 15 00:57:12.087: INFO: stderr: "" May 15 00:57:12.087: INFO: stdout: "e2e-test-crd-publish-openapi-9943-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 15 00:57:12.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9943-crds' May 15 00:57:12.352: INFO: stderr: "" May 15 00:57:12.352: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9943-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:14.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-409" for this suite. • [SLOW TEST:9.910 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":220,"skipped":3568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:14.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:57:14.874: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:57:17.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101034, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101034, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101034, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:57:20.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:20.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2680" for this suite. STEP: Destroying namespace "webhook-2680-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":221,"skipped":3595,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:20.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:31.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2387" for this suite. • [SLOW TEST:11.246 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":222,"skipped":3617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:31.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 15 00:57:31.774: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:31.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9270" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":223,"skipped":3663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:31.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:57:32.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047" in namespace "projected-8852" to be "Succeeded or Failed" May 15 00:57:32.006: INFO: Pod "downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.750892ms May 15 00:57:34.010: INFO: Pod "downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006786984s May 15 00:57:36.014: INFO: Pod "downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010411655s STEP: Saw pod success May 15 00:57:36.014: INFO: Pod "downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047" satisfied condition "Succeeded or Failed" May 15 00:57:36.017: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047 container client-container: STEP: delete the pod May 15 00:57:36.083: INFO: Waiting for pod downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047 to disappear May 15 00:57:36.091: INFO: Pod downwardapi-volume-123f500a-9d18-4d7c-9d12-db4a626ca047 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:36.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8852" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:36.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:57:36.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504" in namespace "downward-api-9953" to be "Succeeded or Failed" May 15 00:57:36.371: INFO: Pod "downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504": Phase="Pending", Reason="", readiness=false. Elapsed: 199.633244ms May 15 00:57:38.376: INFO: Pod "downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20431058s May 15 00:57:40.380: INFO: Pod "downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.20886526s STEP: Saw pod success May 15 00:57:40.380: INFO: Pod "downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504" satisfied condition "Succeeded or Failed" May 15 00:57:40.384: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504 container client-container: STEP: delete the pod May 15 00:57:40.416: INFO: Waiting for pod downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504 to disappear May 15 00:57:40.442: INFO: Pod downwardapi-volume-1b3e1cf3-a323-4034-9125-8c2f6d6a1504 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:40.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9953" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3759,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:57:40.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:57:45.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9025" for this suite. 
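The property checked by the Watchers spec above is that watchers opened at the same resourceVersion observe subsequent events in identical order. A minimal sketch of opening such a watch with client-go; the namespace and the choice of ConfigMaps as the watched resource are assumptions for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cms := kubernetes.NewForConfigOrDie(cfg).CoreV1().ConfigMaps("default")

	// List once to obtain a resourceVersion to start the watch from.
	list, err := cms.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Every watcher opened at the same resourceVersion must see the
	// subsequent events for this collection in the same order.
	w, err := cms.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Printf("%s %s rv=%s\n", ev.Type, cm.Name, cm.ResourceVersion)
		}
	}
}

Running several copies of this loop from the same starting resourceVersion and comparing the printed sequences is, in essence, what the test automates.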
• [SLOW TEST:5.405 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":226,"skipped":3773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 00:57:45.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-24e65eeb-9164-4a9b-92ab-fe500fc308f9
STEP: Creating a pod to test consume secrets
May 15 00:57:45.949: INFO: Waiting up to 5m0s for pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd" in namespace "secrets-3963" to be "Succeeded or Failed"
May 15 00:57:45.954: INFO: Pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.681013ms
May 15 00:57:48.179: INFO: Pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230041156s
May 15 00:57:50.184: INFO: Pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.234348228s
May 15 00:57:52.188: INFO: Pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239156537s
STEP: Saw pod success
May 15 00:57:52.189: INFO: Pod "pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd" satisfied condition "Succeeded or Failed"
May 15 00:57:52.192: INFO: Trying to get logs from node latest-worker pod pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd container secret-volume-test:
STEP: delete the pod
May 15 00:57:52.220: INFO: Waiting for pod pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd to disappear
May 15 00:57:52.235: INFO: Pod pod-secrets-85c47df1-1094-47cb-8aa2-70ac58f755cd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 00:57:52.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3963" for this suite.
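Consuming one secret from two volumes of the same pod, as exercised above, corresponds to a pod spec along these lines. A hedged sketch: the namespace, secret name, data key, image, and mount paths are placeholders, not the suite's values:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, secretName := "default", "secret-example" // placeholders

	// The secret that both volumes below consume.
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: secretName},
		StringData: map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			// One secret backing two distinct volumes in the same pod.
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}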
• [SLOW TEST:6.385 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":227,"skipped":3795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 00:57:52.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 15 00:58:00.441: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:00.540: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:02.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:02.546: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:04.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:04.546: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:06.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:06.546: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:08.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:08.545: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:10.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:10.545: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:12.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:12.545: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:14.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:14.545: INFO: Pod pod-with-poststart-exec-hook still exists
May 15 00:58:16.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 15 00:58:16.545: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 00:58:16.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3646" for this suite.
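A poststart exec hook like the one exercised above hangs off the container's lifecycle field. A minimal sketch using the v1.18-era API this run targets (where the handler type is corev1.Handler; later client-go releases rename it LifecycleHandler); the image and hook command are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox", // assumed; the suite uses its own test images
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container immediately after it starts;
					// a failing hook causes the kubelet to kill the container.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The test's "check poststart hook" step then verifies, via a second server pod, that the hook actually ran before the main container was considered started.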
• [SLOW TEST:24.311 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3853,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:58:16.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3623 STEP: creating service affinity-nodeport in namespace services-3623 STEP: creating replication controller affinity-nodeport in namespace services-3623 I0515 00:58:16.714629 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3623, replica count: 3 I0515 00:58:19.765012 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 00:58:22.765291 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 00:58:22.774: INFO: Creating new exec pod May 15 00:58:27.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3623 execpod-affinitybxfk7 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 15 00:58:28.121: INFO: stderr: "I0515 00:58:28.048336 3184 log.go:172] (0xc00003a420) (0xc000586d20) Create stream\nI0515 00:58:28.048387 3184 log.go:172] (0xc00003a420) (0xc000586d20) Stream added, broadcasting: 1\nI0515 00:58:28.050322 3184 log.go:172] (0xc00003a420) Reply frame received for 1\nI0515 00:58:28.050360 3184 log.go:172] (0xc00003a420) (0xc0001594a0) Create stream\nI0515 00:58:28.050375 3184 log.go:172] (0xc00003a420) (0xc0001594a0) Stream added, broadcasting: 3\nI0515 00:58:28.051381 3184 log.go:172] (0xc00003a420) Reply frame received for 3\nI0515 00:58:28.051407 3184 log.go:172] (0xc00003a420) (0xc000159c20) Create stream\nI0515 00:58:28.051414 3184 log.go:172] (0xc00003a420) (0xc000159c20) Stream added, broadcasting: 5\nI0515 00:58:28.052450 3184 log.go:172] (0xc00003a420) Reply frame received for 5\nI0515 00:58:28.111969 3184 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 00:58:28.111994 
3184 log.go:172] (0xc000159c20) (5) Data frame handling\nI0515 00:58:28.112007 3184 log.go:172] (0xc000159c20) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0515 00:58:28.112505 3184 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 00:58:28.112532 3184 log.go:172] (0xc000159c20) (5) Data frame handling\nI0515 00:58:28.112560 3184 log.go:172] (0xc000159c20) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0515 00:58:28.112741 3184 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 00:58:28.112769 3184 log.go:172] (0xc000159c20) (5) Data frame handling\nI0515 00:58:28.112930 3184 log.go:172] (0xc00003a420) Data frame received for 3\nI0515 00:58:28.112947 3184 log.go:172] (0xc0001594a0) (3) Data frame handling\nI0515 00:58:28.115368 3184 log.go:172] (0xc00003a420) Data frame received for 1\nI0515 00:58:28.115391 3184 log.go:172] (0xc000586d20) (1) Data frame handling\nI0515 00:58:28.115402 3184 log.go:172] (0xc000586d20) (1) Data frame sent\nI0515 00:58:28.115413 3184 log.go:172] (0xc00003a420) (0xc000586d20) Stream removed, broadcasting: 1\nI0515 00:58:28.115712 3184 log.go:172] (0xc00003a420) (0xc000586d20) Stream removed, broadcasting: 1\nI0515 00:58:28.115734 3184 log.go:172] (0xc00003a420) (0xc0001594a0) Stream removed, broadcasting: 3\nI0515 00:58:28.115744 3184 log.go:172] (0xc00003a420) (0xc000159c20) Stream removed, broadcasting: 5\n" May 15 00:58:28.121: INFO: stdout: "" May 15 00:58:28.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3623 execpod-affinitybxfk7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.217.209 80' May 15 00:58:28.324: INFO: stderr: "I0515 00:58:28.250452 3205 log.go:172] (0xc000ba7290) (0xc000a76140) Create stream\nI0515 00:58:28.250515 3205 log.go:172] (0xc000ba7290) (0xc000a76140) Stream added, broadcasting: 1\nI0515 00:58:28.255227 3205 log.go:172] (0xc000ba7290) Reply frame received for 1\nI0515 00:58:28.255267 3205 log.go:172] (0xc000ba7290) (0xc000735f40) Create stream\nI0515 00:58:28.255278 3205 log.go:172] (0xc000ba7290) (0xc000735f40) Stream added, broadcasting: 3\nI0515 00:58:28.256334 3205 log.go:172] (0xc000ba7290) Reply frame received for 3\nI0515 00:58:28.256391 3205 log.go:172] (0xc000ba7290) (0xc0006465a0) Create stream\nI0515 00:58:28.256421 3205 log.go:172] (0xc000ba7290) (0xc0006465a0) Stream added, broadcasting: 5\nI0515 00:58:28.257828 3205 log.go:172] (0xc000ba7290) Reply frame received for 5\nI0515 00:58:28.318185 3205 log.go:172] (0xc000ba7290) Data frame received for 5\nI0515 00:58:28.318224 3205 log.go:172] (0xc0006465a0) (5) Data frame handling\nI0515 00:58:28.318242 3205 log.go:172] (0xc0006465a0) (5) Data frame sent\nI0515 00:58:28.318255 3205 log.go:172] (0xc000ba7290) Data frame received for 5\nI0515 00:58:28.318266 3205 log.go:172] (0xc0006465a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.217.209 80\nConnection to 10.96.217.209 80 port [tcp/http] succeeded!\nI0515 00:58:28.318314 3205 log.go:172] (0xc000ba7290) Data frame received for 3\nI0515 00:58:28.318326 3205 log.go:172] (0xc000735f40) (3) Data frame handling\nI0515 00:58:28.319770 3205 log.go:172] (0xc000ba7290) Data frame received for 1\nI0515 00:58:28.319808 3205 log.go:172] (0xc000a76140) (1) Data frame handling\nI0515 00:58:28.319863 3205 log.go:172] (0xc000a76140) (1) Data frame sent\nI0515 00:58:28.319885 3205 log.go:172] (0xc000ba7290) (0xc000a76140) Stream removed, broadcasting: 1\nI0515 00:58:28.319902 3205 
log.go:172] (0xc000ba7290) Go away received\nI0515 00:58:28.320457 3205 log.go:172] (0xc000ba7290) (0xc000a76140) Stream removed, broadcasting: 1\nI0515 00:58:28.320479 3205 log.go:172] (0xc000ba7290) (0xc000735f40) Stream removed, broadcasting: 3\nI0515 00:58:28.320489 3205 log.go:172] (0xc000ba7290) (0xc0006465a0) Stream removed, broadcasting: 5\n" May 15 00:58:28.325: INFO: stdout: "" May 15 00:58:28.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3623 execpod-affinitybxfk7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31499' May 15 00:58:28.550: INFO: stderr: "I0515 00:58:28.472434 3227 log.go:172] (0xc000d08e70) (0xc0000dd860) Create stream\nI0515 00:58:28.472487 3227 log.go:172] (0xc000d08e70) (0xc0000dd860) Stream added, broadcasting: 1\nI0515 00:58:28.475087 3227 log.go:172] (0xc000d08e70) Reply frame received for 1\nI0515 00:58:28.475122 3227 log.go:172] (0xc000d08e70) (0xc0003337c0) Create stream\nI0515 00:58:28.475132 3227 log.go:172] (0xc000d08e70) (0xc0003337c0) Stream added, broadcasting: 3\nI0515 00:58:28.475986 3227 log.go:172] (0xc000d08e70) Reply frame received for 3\nI0515 00:58:28.476017 3227 log.go:172] (0xc000d08e70) (0xc0004bc1e0) Create stream\nI0515 00:58:28.476026 3227 log.go:172] (0xc000d08e70) (0xc0004bc1e0) Stream added, broadcasting: 5\nI0515 00:58:28.476912 3227 log.go:172] (0xc000d08e70) Reply frame received for 5\nI0515 00:58:28.542354 3227 log.go:172] (0xc000d08e70) Data frame received for 5\nI0515 00:58:28.542401 3227 log.go:172] (0xc0004bc1e0) (5) Data frame handling\nI0515 00:58:28.542467 3227 log.go:172] (0xc0004bc1e0) (5) Data frame sent\nI0515 00:58:28.542514 3227 log.go:172] (0xc000d08e70) Data frame received for 5\nI0515 00:58:28.542534 3227 log.go:172] (0xc0004bc1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31499\nConnection to 172.17.0.13 31499 port [tcp/31499] succeeded!\nI0515 00:58:28.542643 3227 log.go:172] (0xc0004bc1e0) (5) Data frame sent\nI0515 00:58:28.542688 3227 log.go:172] (0xc000d08e70) Data frame received for 5\nI0515 00:58:28.542705 3227 log.go:172] (0xc0004bc1e0) (5) Data frame handling\nI0515 00:58:28.542868 3227 log.go:172] (0xc000d08e70) Data frame received for 3\nI0515 00:58:28.542892 3227 log.go:172] (0xc0003337c0) (3) Data frame handling\nI0515 00:58:28.544951 3227 log.go:172] (0xc000d08e70) Data frame received for 1\nI0515 00:58:28.544972 3227 log.go:172] (0xc0000dd860) (1) Data frame handling\nI0515 00:58:28.544979 3227 log.go:172] (0xc0000dd860) (1) Data frame sent\nI0515 00:58:28.544988 3227 log.go:172] (0xc000d08e70) (0xc0000dd860) Stream removed, broadcasting: 1\nI0515 00:58:28.545017 3227 log.go:172] (0xc000d08e70) Go away received\nI0515 00:58:28.545428 3227 log.go:172] (0xc000d08e70) (0xc0000dd860) Stream removed, broadcasting: 1\nI0515 00:58:28.545449 3227 log.go:172] (0xc000d08e70) (0xc0003337c0) Stream removed, broadcasting: 3\nI0515 00:58:28.545456 3227 log.go:172] (0xc000d08e70) (0xc0004bc1e0) Stream removed, broadcasting: 5\n" May 15 00:58:28.550: INFO: stdout: "" May 15 00:58:28.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3623 execpod-affinitybxfk7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31499' May 15 00:58:28.763: INFO: stderr: "I0515 00:58:28.688952 3247 log.go:172] (0xc0009b0790) (0xc0006fcfa0) Create stream\nI0515 00:58:28.689020 3247 log.go:172] (0xc0009b0790) (0xc0006fcfa0) Stream added, broadcasting: 1\nI0515 
00:58:28.693960 3247 log.go:172] (0xc0009b0790) Reply frame received for 1\nI0515 00:58:28.694023 3247 log.go:172] (0xc0009b0790) (0xc000520000) Create stream\nI0515 00:58:28.694046 3247 log.go:172] (0xc0009b0790) (0xc000520000) Stream added, broadcasting: 3\nI0515 00:58:28.695236 3247 log.go:172] (0xc0009b0790) Reply frame received for 3\nI0515 00:58:28.695302 3247 log.go:172] (0xc0009b0790) (0xc000520960) Create stream\nI0515 00:58:28.695322 3247 log.go:172] (0xc0009b0790) (0xc000520960) Stream added, broadcasting: 5\nI0515 00:58:28.696174 3247 log.go:172] (0xc0009b0790) Reply frame received for 5\nI0515 00:58:28.757290 3247 log.go:172] (0xc0009b0790) Data frame received for 5\nI0515 00:58:28.757314 3247 log.go:172] (0xc000520960) (5) Data frame handling\nI0515 00:58:28.757324 3247 log.go:172] (0xc000520960) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31499\nConnection to 172.17.0.12 31499 port [tcp/31499] succeeded!\nI0515 00:58:28.757664 3247 log.go:172] (0xc0009b0790) Data frame received for 3\nI0515 00:58:28.757677 3247 log.go:172] (0xc000520000) (3) Data frame handling\nI0515 00:58:28.757695 3247 log.go:172] (0xc0009b0790) Data frame received for 5\nI0515 00:58:28.757701 3247 log.go:172] (0xc000520960) (5) Data frame handling\nI0515 00:58:28.759090 3247 log.go:172] (0xc0009b0790) Data frame received for 1\nI0515 00:58:28.759109 3247 log.go:172] (0xc0006fcfa0) (1) Data frame handling\nI0515 00:58:28.759118 3247 log.go:172] (0xc0006fcfa0) (1) Data frame sent\nI0515 00:58:28.759129 3247 log.go:172] (0xc0009b0790) (0xc0006fcfa0) Stream removed, broadcasting: 1\nI0515 00:58:28.759214 3247 log.go:172] (0xc0009b0790) Go away received\nI0515 00:58:28.759399 3247 log.go:172] (0xc0009b0790) (0xc0006fcfa0) Stream removed, broadcasting: 1\nI0515 00:58:28.759413 3247 log.go:172] (0xc0009b0790) (0xc000520000) Stream removed, broadcasting: 3\nI0515 00:58:28.759420 3247 log.go:172] (0xc0009b0790) (0xc000520960) Stream removed, broadcasting: 5\n" May 15 00:58:28.763: INFO: stdout: "" May 15 00:58:28.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3623 execpod-affinitybxfk7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31499/ ; done' May 15 00:58:29.067: INFO: stderr: "I0515 00:58:28.898346 3268 log.go:172] (0xc0009fc0b0) (0xc0005de820) Create stream\nI0515 00:58:28.898386 3268 log.go:172] (0xc0009fc0b0) (0xc0005de820) Stream added, broadcasting: 1\nI0515 00:58:28.900438 3268 log.go:172] (0xc0009fc0b0) Reply frame received for 1\nI0515 00:58:28.900481 3268 log.go:172] (0xc0009fc0b0) (0xc00052e280) Create stream\nI0515 00:58:28.900496 3268 log.go:172] (0xc0009fc0b0) (0xc00052e280) Stream added, broadcasting: 3\nI0515 00:58:28.901879 3268 log.go:172] (0xc0009fc0b0) Reply frame received for 3\nI0515 00:58:28.901939 3268 log.go:172] (0xc0009fc0b0) (0xc00052f220) Create stream\nI0515 00:58:28.901972 3268 log.go:172] (0xc0009fc0b0) (0xc00052f220) Stream added, broadcasting: 5\nI0515 00:58:28.903057 3268 log.go:172] (0xc0009fc0b0) Reply frame received for 5\nI0515 00:58:28.973440 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:28.973493 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:28.973520 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:28.973549 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 
00:58:28.973565 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.973587 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.981525 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.981550 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.981565 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.982470 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.982502 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.982515 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.982533 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:28.982543 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:28.982553 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:28.986314 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.986339 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.986353 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.986655 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:28.986680 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:28.986689 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:28.986699 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.986704 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.986710 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.992692 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.992724 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.992756 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.993026 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:28.993056 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:28.993092 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:28.993394 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.993418 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.993447 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.998613 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.998637 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.998664 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:28.999555 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:28.999570 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:28.999583 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:28.999618 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:28.999634 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:28.999663 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.003655 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.003665 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.003670 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.004147 3268 log.go:172] (0xc0009fc0b0) Data frame received for 
5\nI0515 00:58:29.004162 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.004182 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.004292 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.004312 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.004332 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.008378 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.008390 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.008397 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.009053 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.009075 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.009089 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.009102 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.009342 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.009363 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.013380 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.013415 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.013432 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.013689 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.013728 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.013753 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.013784 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.013797 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.013825 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.018046 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.018066 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.018077 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.018922 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.018965 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.018983 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.019006 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.019038 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.019061 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.022534 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.022564 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.022605 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.022966 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.022992 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.023018 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.023087 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.023111 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.023123 3268 log.go:172] (0xc00052e280) (3) Data frame 
sent\nI0515 00:58:29.026750 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.026776 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.026797 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.027303 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.027336 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.027348 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.027360 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.027369 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.027376 3268 log.go:172] (0xc00052f220) (5) Data frame sent\nI0515 00:58:29.027385 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.027396 3268 log.go:172] (0xc00052f220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.027419 3268 log.go:172] (0xc00052f220) (5) Data frame sent\nI0515 00:58:29.031553 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.031579 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.031606 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.031925 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.031942 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.031956 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.032017 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.032034 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.032050 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.035961 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.035988 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.036016 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.036391 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.036415 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.036443 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.036465 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.036491 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.036508 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.040524 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.040556 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.040595 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.040830 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.040853 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.040865 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.040880 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.040891 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.040901 3268 log.go:172] (0xc00052f220) (5) Data frame sent\nI0515 00:58:29.040911 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0515 00:58:29.040937 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.040975 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n 
http://172.17.0.13:31499/\nI0515 00:58:29.045921 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.045958 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.046005 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.046939 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.046977 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.046991 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.047007 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.047016 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.047027 3268 log.go:172] (0xc00052f220) (5) Data frame sent\nI0515 00:58:29.047054 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.047068 3268 log.go:172] (0xc00052f220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.047088 3268 log.go:172] (0xc00052f220) (5) Data frame sent\nI0515 00:58:29.051941 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.052006 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.052048 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.052214 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.052251 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.052293 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.052313 3268 log.go:172] (0xc00052f220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31499/\nI0515 00:58:29.052331 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.052347 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.058545 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.058565 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.058584 3268 log.go:172] (0xc00052e280) (3) Data frame sent\nI0515 00:58:29.059458 3268 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0515 00:58:29.059588 3268 log.go:172] (0xc00052f220) (5) Data frame handling\nI0515 00:58:29.059634 3268 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0515 00:58:29.059664 3268 log.go:172] (0xc00052e280) (3) Data frame handling\nI0515 00:58:29.061458 3268 log.go:172] (0xc0009fc0b0) Data frame received for 1\nI0515 00:58:29.061506 3268 log.go:172] (0xc0005de820) (1) Data frame handling\nI0515 00:58:29.061525 3268 log.go:172] (0xc0005de820) (1) Data frame sent\nI0515 00:58:29.061541 3268 log.go:172] (0xc0009fc0b0) (0xc0005de820) Stream removed, broadcasting: 1\nI0515 00:58:29.061578 3268 log.go:172] (0xc0009fc0b0) Go away received\nI0515 00:58:29.062016 3268 log.go:172] (0xc0009fc0b0) (0xc0005de820) Stream removed, broadcasting: 1\nI0515 00:58:29.062044 3268 log.go:172] (0xc0009fc0b0) (0xc00052e280) Stream removed, broadcasting: 3\nI0515 00:58:29.062063 3268 log.go:172] (0xc0009fc0b0) (0xc00052f220) Stream removed, broadcasting: 5\n" May 15 00:58:29.068: INFO: stdout: "\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr\naffinity-nodeport-jwzfr" May 15 00:58:29.068: INFO: Received 
response from host: May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Received response from host: affinity-nodeport-jwzfr May 15 00:58:29.068: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-3623, will wait for the garbage collector to delete the pods May 15 00:58:29.258: INFO: Deleting ReplicationController affinity-nodeport took: 93.157714ms May 15 00:58:29.658: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.313517ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:58:45.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3623" for this suite. 
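The sixteen identical "affinity-nodeport-jwzfr" responses above are the whole assertion: with sessionAffinity: ClientIP set on a NodePort service, repeated requests from one client must keep landing on the same backend pod. A minimal standalone sketch of that probe loop, assuming the node address and nodePort seen in the log (172.17.0.13:31499) and a service that is still up:

NODE=172.17.0.13:31499            # assumption: <node-ip>:<nodePort> taken from the log above
for i in $(seq 1 16); do
  curl -q -s --connect-timeout 2 "http://${NODE}/"   # the same probe the exec pod runs
  echo
done | sort | uniq -c             # a single distinct pod name means affinity held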
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.895 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":229,"skipped":3856,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:58:45.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 15 00:58:45.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 15 00:58:45.769: INFO: stderr: "" May 15 00:58:45.769: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:58:45.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8199" for this suite. 
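The api-versions check above reduces to one CLI call; a minimal equivalent outside the suite (using the same kubeconfig path the suite uses) that exits non-zero when the core group/version is missing:

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1   # prints "v1" only if the core API is served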
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":230,"skipped":3858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:58:45.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0515 00:58:55.902097 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 00:58:55.902: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:58:55.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3031" for this suite. 
• [SLOW TEST:10.137 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":231,"skipped":3885,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:58:55.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-g458 STEP: Creating a pod to test atomic-volume-subpath May 15 00:58:56.030: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-g458" in namespace "subpath-9267" to be "Succeeded or Failed" May 15 00:58:56.033: INFO: Pod "pod-subpath-test-secret-g458": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341274ms May 15 00:58:58.037: INFO: Pod "pod-subpath-test-secret-g458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007526231s May 15 00:59:00.042: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 4.011561485s May 15 00:59:02.045: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 6.014543964s May 15 00:59:04.048: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 8.017893658s May 15 00:59:06.052: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 10.02224126s May 15 00:59:08.056: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 12.026080827s May 15 00:59:10.060: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 14.029672399s May 15 00:59:12.063: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 16.033178372s May 15 00:59:14.067: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 18.037435911s May 15 00:59:16.071: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 20.041068295s May 15 00:59:18.076: INFO: Pod "pod-subpath-test-secret-g458": Phase="Running", Reason="", readiness=true. Elapsed: 22.045886779s May 15 00:59:20.080: INFO: Pod "pod-subpath-test-secret-g458": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.050257317s STEP: Saw pod success May 15 00:59:20.080: INFO: Pod "pod-subpath-test-secret-g458" satisfied condition "Succeeded or Failed" May 15 00:59:20.084: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-g458 container test-container-subpath-secret-g458: STEP: delete the pod May 15 00:59:20.117: INFO: Waiting for pod pod-subpath-test-secret-g458 to disappear May 15 00:59:20.121: INFO: Pod pod-subpath-test-secret-g458 no longer exists STEP: Deleting pod pod-subpath-test-secret-g458 May 15 00:59:20.121: INFO: Deleting pod "pod-subpath-test-secret-g458" in namespace "subpath-9267" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:20.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9267" for this suite. • [SLOW TEST:24.214 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":232,"skipped":3894,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:20.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 15 00:59:27.641: INFO: 10 pods remaining May 15 00:59:27.641: INFO: 0 pods have nil DeletionTimestamp May 15 00:59:27.641: INFO: May 15 00:59:29.047: INFO: 0 pods remaining May 15 00:59:29.047: INFO: 0 pods have nil DeletionTimestamp May 15 00:59:29.047: INFO: STEP: Gathering metrics W0515 00:59:29.766659 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 15 00:59:29.766: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1655" for this suite. • [SLOW TEST:10.626 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":233,"skipped":3904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:30.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 15 00:59:31.645: INFO: Waiting up to 5m0s for pod "pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0" in namespace "emptydir-2799" to be "Succeeded or Failed" May 15 00:59:31.958: INFO: Pod "pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 313.340202ms May 15 00:59:33.963: INFO: Pod "pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317953579s May 15 00:59:36.126: INFO: Pod "pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.481635877s STEP: Saw pod success May 15 00:59:36.127: INFO: Pod "pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0" satisfied condition "Succeeded or Failed" May 15 00:59:36.129: INFO: Trying to get logs from node latest-worker pod pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0 container test-container: STEP: delete the pod May 15 00:59:36.359: INFO: Waiting for pod pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0 to disappear May 15 00:59:36.364: INFO: Pod pod-3e8f82f9-9e8c-40d8-8c2a-2ef856bcd0c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:36.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2799" for this suite. • [SLOW TEST:5.704 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3943,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:36.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 00:59:36.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6" in namespace "projected-1813" to be "Succeeded or Failed" May 15 00:59:36.634: INFO: Pod "downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.852988ms May 15 00:59:38.638: INFO: Pod "downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019458742s May 15 00:59:40.643: INFO: Pod "downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024418904s STEP: Saw pod success May 15 00:59:40.643: INFO: Pod "downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6" satisfied condition "Succeeded or Failed" May 15 00:59:40.646: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6 container client-container: STEP: delete the pod May 15 00:59:40.683: INFO: Waiting for pod downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6 to disappear May 15 00:59:40.690: INFO: Pod downwardapi-volume-7b66c6b4-b221-4199-87df-89eabda690f6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:40.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1813" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3965,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:40.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 00:59:41.755: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 00:59:43.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101181, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101181, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101181, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101181, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 00:59:46.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration 
STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:46.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3491" for this suite. STEP: Destroying namespace "webhook-3491-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.423 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":236,"skipped":3980,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:47.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 15 00:59:47.251: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 00:59:55.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1261" for this suite.
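A minimal sketch of the kind of pod the init-container test above exercises (names, images, and commands are assumptions; the log does not print the fixture's spec). With restartPolicy: Never, each init container must run to completion, in order, before the app container runs once:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo init-1 done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo init-2 done']
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo app ran once']
EOF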
• [SLOW TEST:8.507 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":237,"skipped":3984,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 00:59:55.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi-version CRD May 15 00:59:55.696: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:11.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-144" for this suite.
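The CRD test above marks one version of a multi-version CRD as not served and then verifies its definitions drop out of the published OpenAPI document. A hedged sketch of the same operation (the CRD name, version index, and definition key are placeholders):

kubectl patch crd foos.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'   # hypothetical definition key; the count should drop to 0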
• [SLOW TEST:15.621 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":238,"skipped":3990,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:11.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 15 01:00:11.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-4145 -- logs-generator --log-lines-total 100 --run-duration 20s' May 15 01:00:11.521: INFO: stderr: "" May 15 01:00:11.521: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 15 01:00:11.521: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 15 01:00:11.521: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4145" to be "running and ready, or succeeded" May 15 01:00:11.526: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911852ms May 15 01:00:13.605: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083872329s May 15 01:00:15.610: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.088621383s May 15 01:00:15.610: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 15 01:00:15.610: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 15 01:00:15.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4145' May 15 01:00:15.744: INFO: stderr: "" May 15 01:00:15.744: INFO: stdout: "I0515 01:00:14.116726 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/g8mp 413\nI0515 01:00:14.316848 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/7ksw 348\nI0515 01:00:14.516882 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/pnmq 547\nI0515 01:00:14.716875 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/kck 595\nI0515 01:00:14.916876 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/4bg6 396\nI0515 01:00:15.116908 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/57m 321\nI0515 01:00:15.316942 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/f5h 233\nI0515 01:00:15.516911 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/tgkh 514\nI0515 01:00:15.716895 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/wprf 509\n" STEP: limiting log lines May 15 01:00:15.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4145 --tail=1' May 15 01:00:15.857: INFO: stderr: "" May 15 01:00:15.857: INFO: stdout: "I0515 01:00:15.716895 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/wprf 509\n" May 15 01:00:15.857: INFO: got output "I0515 01:00:15.716895 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/wprf 509\n" STEP: limiting log bytes May 15 01:00:15.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4145 --limit-bytes=1' May 15 01:00:15.975: INFO: stderr: "" May 15 01:00:15.975: INFO: stdout: "I" May 15 01:00:15.975: INFO: got output "I" STEP: exposing timestamps May 15 01:00:15.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4145 --tail=1 --timestamps' May 15 01:00:16.090: INFO: stderr: "" May 15 01:00:16.090: INFO: stdout: "2020-05-15T01:00:15.917063311Z I0515 01:00:15.916876 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/wtqm 534\n" May 15 01:00:16.090: INFO: got output "2020-05-15T01:00:15.917063311Z I0515 01:00:15.916876 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/wtqm 534\n" STEP: restricting to a time range May 15 01:00:18.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4145 --since=1s' May 15 01:00:18.709: INFO: stderr: "" May 15 01:00:18.709: INFO: stdout: "I0515 01:00:17.716932 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/979 594\nI0515 01:00:17.916920 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/6qjp 592\nI0515 01:00:18.116925 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/88x 589\nI0515 01:00:18.316972 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/dksh 393\nI0515 01:00:18.516922 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/jzt 344\n" May 15 01:00:18.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs
logs-generator logs-generator --namespace=kubectl-4145 --since=24h' May 15 01:00:18.835: INFO: stderr: "" May 15 01:00:18.835: INFO: stdout: "I0515 01:00:14.116726 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/g8mp 413\nI0515 01:00:14.316848 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/7ksw 348\nI0515 01:00:14.516882 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/pnmq 547\nI0515 01:00:14.716875 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/kck 595\nI0515 01:00:14.916876 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/4bg6 396\nI0515 01:00:15.116908 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/57m 321\nI0515 01:00:15.316942 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/f5h 233\nI0515 01:00:15.516911 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/tgkh 514\nI0515 01:00:15.716895 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/wprf 509\nI0515 01:00:15.916876 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/wtqm 534\nI0515 01:00:16.116918 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/gtms 503\nI0515 01:00:16.316890 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/rmp 485\nI0515 01:00:16.516924 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/dznr 453\nI0515 01:00:16.716886 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/r6tz 427\nI0515 01:00:16.916961 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/q7q 309\nI0515 01:00:17.116946 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/xzz 229\nI0515 01:00:17.316901 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/ggk4 259\nI0515 01:00:17.516905 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/cztz 542\nI0515 01:00:17.716932 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/979 594\nI0515 01:00:17.916920 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/6qjp 592\nI0515 01:00:18.116925 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/88x 589\nI0515 01:00:18.316972 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/dksh 393\nI0515 01:00:18.516922 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/jzt 344\nI0515 01:00:18.716915 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/fn5 564\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 15 01:00:18.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4145' May 15 01:00:21.661: INFO: stderr: "" May 15 01:00:21.661: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:21.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4145" for this suite. 
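Each filtering step above maps to a single kubectl logs flag; the same calls, runnable against any pod (names per the test above):

kubectl logs logs-generator --tail=1                # only the last line
kubectl logs logs-generator --limit-bytes=1         # only the first byte, hence the lone "I"
kubectl logs logs-generator --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator --since=1s              # only lines emitted in the last second
kubectl logs logs-generator --since=24h             # wide enough to return everything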
• [SLOW TEST:10.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":239,"skipped":4019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:21.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 01:00:21.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4090' May 15 01:00:21.888: INFO: stderr: "" May 15 01:00:21.888: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 15 01:00:21.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4090' May 15 01:00:23.954: INFO: stderr: "" May 15 01:00:23.954: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:23.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4090" for this suite. 
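As the verification above implies, --restart=Never makes kubectl run create a bare Pod rather than a managed workload, so the container runs once and is never restarted. A quick way to confirm the resulting policy (same name and image as the test):

kubectl run e2e-test-httpd-pod --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never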
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":240,"skipped":4051,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:23.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:24.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9132" for this suite. STEP: Destroying namespace "nspatchtest-2cd31f45-326f-4d9d-bedd-c0466d7fbd19-8327" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":241,"skipped":4051,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:24.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-7673 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7673 STEP: Deleting pre-stop pod May 15 01:00:37.446: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7673" for this suite. 
• [SLOW TEST:13.212 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":242,"skipped":4057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:37.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:00:37.582: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 15 01:00:38.000: INFO: Pod name sample-pod: Found 0 pods out of 1 May 15 01:00:43.018: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 01:00:43.018: INFO: Creating deployment "test-rolling-update-deployment" May 15 01:00:43.023: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 15 01:00:43.037: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 15 01:00:46.199: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected May 15 01:00:46.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101243, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101243, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101243, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101243, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 01:00:48.206: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 15 01:00:48.289: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3325
/apis/apps/v1/namespaces/deployment-3325/deployments/test-rolling-update-deployment 20c89846-b94b-44d4-a55c-fbbffebd59bd 4691818 1 2020-05-15 01:00:43 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-15 01:00:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 01:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ad24c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 01:00:43 +0000 UTC,LastTransitionTime:2020-05-15 01:00:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-15 01:00:47 +0000 UTC,LastTransitionTime:2020-05-15 01:00:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 01:00:48.295: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-3325 /apis/apps/v1/namespaces/deployment-3325/replicasets/test-rolling-update-deployment-df7bb669b 47741f85-4872-488e-9de4-7544593e2f52 4691807 1 2020-05-15 01:00:43 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 20c89846-b94b-44d4-a55c-fbbffebd59bd 0xc003ad2d60 0xc003ad2d61}] [] [{kube-controller-manager Update apps/v1 2020-05-15 01:00:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20c89846-b94b-44d4-a55c-fbbffebd59bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ad2e88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 01:00:48.295: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 15 01:00:48.295: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3325 /apis/apps/v1/namespaces/deployment-3325/replicasets/test-rolling-update-controller 779640b5-4a73-42c3-8f5b-4cdff1f7879a 4691817 2 2020-05-15 01:00:37 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 20c89846-b94b-44d4-a55c-fbbffebd59bd 0xc003ad2ba7 0xc003ad2ba8}] [] [{e2e.test Update apps/v1 
2020-05-15 01:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-15 01:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20c89846-b94b-44d4-a55c-fbbffebd59bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003ad2cb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 01:00:48.298: INFO: Pod "test-rolling-update-deployment-df7bb669b-cv4sm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-cv4sm test-rolling-update-deployment-df7bb669b- deployment-3325 /api/v1/namespaces/deployment-3325/pods/test-rolling-update-deployment-df7bb669b-cv4sm 19a21e0c-c585-40f8-bbff-0415a7d831d3 4691806 0 2020-05-15 01:00:43 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 47741f85-4872-488e-9de4-7544593e2f52 0xc003b397b0 0xc003b397b1}] [] [{kube-controller-manager Update v1 2020-05-15 01:00:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47741f85-4872-488e-9de4-7544593e2f52\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-15 01:00:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lqlvc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lqlvc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lqlvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 01:00:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 01:00:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 01:00:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 01:00:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.227,StartTime:2020-05-15 01:00:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 01:00:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://59695b01b359b8ecd9b0349d1b3256d4e7ee2fe8ce5b0596c97a934f85de048c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:48.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3325" for this suite. • [SLOW TEST:10.807 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":243,"skipped":4087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:48.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 01:00:49.117: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 01:00:51.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 01:00:53.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101249, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 01:00:56.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:00:56.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5182-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:00:57.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1354" for this suite. STEP: Destroying namespace "webhook-1354-markers" for this suite. 
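Registering the mutating webhook for the custom resource, as done above via the AdmissionRegistration API, amounts to creating a MutatingWebhookConfiguration along these lines. This is a minimal sketch rather than the framework's exact object: the configuration name, service path, and caBundle placeholder are illustrative, while the webhook name, CRD group/resource, and service values are taken from the log.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-example        # illustrative name
webhooks:
- name: e2e-test-webhook-5182-crds.webhook.example.com
  clientConfig:
    service:
      name: e2e-test-webhook                  # the service paired with the endpoint above
      namespace: webhook-1354
      path: /mutating-custom-resource         # illustrative path
    caBundle: <base64 CA bundle from the server cert set up earlier>
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]                       # version illustrative
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-5182-crds"]
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail

With this in place, the subsequent "Creating a custom resource that should be mutated" step succeeds only if the apiserver reaches the webhook service and applies its patch, which is exactly what the test asserts.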
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.092 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":244,"skipped":4117,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:00:57.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 01:00:58.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 01:01:00.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101258, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101258, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101258, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101258, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 01:01:03.116: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:01:03.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-861" for this suite. STEP: Destroying namespace "webhook-861-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.883 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":245,"skipped":4125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:01:03.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:03:03.440: INFO: Deleting pod "var-expansion-69d279d7-2f1e-4fc8-8a59-3965d4746118" in namespace "var-expansion-9587" May 15 01:03:03.445: INFO: Wait up to 5m0s for pod "var-expansion-69d279d7-2f1e-4fc8-8a59-3965d4746118" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:03:07.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9587" for this suite. 
• [SLOW TEST:124.272 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":246,"skipped":4177,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:03:07.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-077af251-92c9-45f8-9f04-46123e5fa4ab in namespace container-probe-1353 May 15 01:03:11.675: INFO: Started pod liveness-077af251-92c9-45f8-9f04-46123e5fa4ab in namespace container-probe-1353 STEP: checking the pod's current state and verifying that restartCount is present May 15 01:03:11.702: INFO: Initial restart count of pod liveness-077af251-92c9-45f8-9f04-46123e5fa4ab is 0 May 15 01:03:31.785: INFO: Restart count of pod container-probe-1353/liveness-077af251-92c9-45f8-9f04-46123e5fa4ab is now 1 (20.083386992s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:03:31.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1353" for this suite. 
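The restart observed in this probe test is driven by an HTTP liveness probe against /healthz; the conformance suite runs the agnhost liveness server, which answers /healthz successfully at first and then starts failing it. A rough equivalent manifest, with the port and timing values as plausible defaults rather than necessarily the test's exact settings:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # illustrative
spec:
  restartPolicy: Always
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["liveness"]             # /healthz returns 200 briefly, then starts failing
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1

The roughly 20-second gap between recording the initial restart count and seeing it reach 1 in the log is consistent with the probe's initial delay plus the failure window.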
• [SLOW TEST:24.279 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4193,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:03:31.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 01:03:31.936: INFO: Waiting up to 5m0s for pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963" in namespace "emptydir-7566" to be "Succeeded or Failed" May 15 01:03:31.940: INFO: Pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128075ms May 15 01:03:33.959: INFO: Pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022722945s May 15 01:03:35.963: INFO: Pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963": Phase="Running", Reason="", readiness=true. Elapsed: 4.027096306s May 15 01:03:37.968: INFO: Pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031622055s STEP: Saw pod success May 15 01:03:37.968: INFO: Pod "pod-42d63190-3067-4abe-a1fd-5516f9dd6963" satisfied condition "Succeeded or Failed" May 15 01:03:37.971: INFO: Trying to get logs from node latest-worker pod pod-42d63190-3067-4abe-a1fd-5516f9dd6963 container test-container: STEP: delete the pod May 15 01:03:38.038: INFO: Waiting for pod pod-42d63190-3067-4abe-a1fd-5516f9dd6963 to disappear May 15 01:03:38.044: INFO: Pod pod-42d63190-3067-4abe-a1fd-5516f9dd6963 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:03:38.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7566" for this suite. 
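The emptyDir case above boils down to a memory-backed (tmpfs) volume mounted into a pod that runs as a non-root user, whose container then verifies the 0777 mode on the mount. Sketched as a manifest, with a busybox permission check standing in for the test's own image and the UID chosen arbitrarily:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0777    # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # any non-root UID
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs rather than node disk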
• [SLOW TEST:6.217 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4197,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:03:38.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2119 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 15 01:03:38.159: INFO: Found 0 stateful pods, waiting for 3 May 15 01:03:48.164: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 01:03:48.165: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 01:03:48.165: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 15 01:03:58.165: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 01:03:58.165: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 01:03:58.165: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 15 01:03:58.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2119 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 01:03:58.473: INFO: stderr: "I0515 01:03:58.328540 3502 log.go:172] (0xc0005c8000) (0xc000632280) Create stream\nI0515 01:03:58.328602 3502 log.go:172] (0xc0005c8000) (0xc000632280) Stream added, broadcasting: 1\nI0515 01:03:58.331080 3502 log.go:172] (0xc0005c8000) Reply frame received for 1\nI0515 01:03:58.331126 3502 log.go:172] (0xc0005c8000) (0xc000632b40) Create stream\nI0515 01:03:58.331142 3502 log.go:172] (0xc0005c8000) (0xc000632b40) Stream added, broadcasting: 3\nI0515 01:03:58.332162 3502 log.go:172] (0xc0005c8000) Reply frame received for 3\nI0515 01:03:58.332212 3502 log.go:172] (0xc0005c8000) (0xc00063d400) Create stream\nI0515 01:03:58.332232 3502 log.go:172] (0xc0005c8000) (0xc00063d400) Stream added, broadcasting: 5\nI0515 01:03:58.333594 3502 
log.go:172] (0xc0005c8000) Reply frame received for 5\nI0515 01:03:58.412606 3502 log.go:172] (0xc0005c8000) Data frame received for 5\nI0515 01:03:58.412629 3502 log.go:172] (0xc00063d400) (5) Data frame handling\nI0515 01:03:58.412640 3502 log.go:172] (0xc00063d400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 01:03:58.463470 3502 log.go:172] (0xc0005c8000) Data frame received for 3\nI0515 01:03:58.463518 3502 log.go:172] (0xc000632b40) (3) Data frame handling\nI0515 01:03:58.463557 3502 log.go:172] (0xc000632b40) (3) Data frame sent\nI0515 01:03:58.463808 3502 log.go:172] (0xc0005c8000) Data frame received for 5\nI0515 01:03:58.463848 3502 log.go:172] (0xc00063d400) (5) Data frame handling\nI0515 01:03:58.463872 3502 log.go:172] (0xc0005c8000) Data frame received for 3\nI0515 01:03:58.463885 3502 log.go:172] (0xc000632b40) (3) Data frame handling\nI0515 01:03:58.465957 3502 log.go:172] (0xc0005c8000) Data frame received for 1\nI0515 01:03:58.465980 3502 log.go:172] (0xc000632280) (1) Data frame handling\nI0515 01:03:58.466000 3502 log.go:172] (0xc000632280) (1) Data frame sent\nI0515 01:03:58.466027 3502 log.go:172] (0xc0005c8000) (0xc000632280) Stream removed, broadcasting: 1\nI0515 01:03:58.466050 3502 log.go:172] (0xc0005c8000) Go away received\nI0515 01:03:58.466533 3502 log.go:172] (0xc0005c8000) (0xc000632280) Stream removed, broadcasting: 1\nI0515 01:03:58.466560 3502 log.go:172] (0xc0005c8000) (0xc000632b40) Stream removed, broadcasting: 3\nI0515 01:03:58.466591 3502 log.go:172] (0xc0005c8000) (0xc00063d400) Stream removed, broadcasting: 5\n" May 15 01:03:58.473: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 01:03:58.473: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 15 01:04:08.505: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 15 01:04:18.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2119 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 01:04:18.874: INFO: stderr: "I0515 01:04:18.785257 3522 log.go:172] (0xc00003a420) (0xc0008e0000) Create stream\nI0515 01:04:18.785314 3522 log.go:172] (0xc00003a420) (0xc0008e0000) Stream added, broadcasting: 1\nI0515 01:04:18.787930 3522 log.go:172] (0xc00003a420) Reply frame received for 1\nI0515 01:04:18.787962 3522 log.go:172] (0xc00003a420) (0xc0008f19a0) Create stream\nI0515 01:04:18.787971 3522 log.go:172] (0xc00003a420) (0xc0008f19a0) Stream added, broadcasting: 3\nI0515 01:04:18.788821 3522 log.go:172] (0xc00003a420) Reply frame received for 3\nI0515 01:04:18.788852 3522 log.go:172] (0xc00003a420) (0xc0008e0780) Create stream\nI0515 01:04:18.788862 3522 log.go:172] (0xc00003a420) (0xc0008e0780) Stream added, broadcasting: 5\nI0515 01:04:18.789716 3522 log.go:172] (0xc00003a420) Reply frame received for 5\nI0515 01:04:18.866636 3522 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 01:04:18.866677 3522 log.go:172] (0xc0008e0780) (5) Data frame handling\nI0515 01:04:18.866700 3522 log.go:172] (0xc0008e0780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 01:04:18.866716 3522 log.go:172] 
(0xc00003a420) Data frame received for 3\nI0515 01:04:18.866726 3522 log.go:172] (0xc0008f19a0) (3) Data frame handling\nI0515 01:04:18.866734 3522 log.go:172] (0xc0008f19a0) (3) Data frame sent\nI0515 01:04:18.866743 3522 log.go:172] (0xc00003a420) Data frame received for 3\nI0515 01:04:18.866750 3522 log.go:172] (0xc0008f19a0) (3) Data frame handling\nI0515 01:04:18.867084 3522 log.go:172] (0xc00003a420) Data frame received for 5\nI0515 01:04:18.867098 3522 log.go:172] (0xc0008e0780) (5) Data frame handling\nI0515 01:04:18.869584 3522 log.go:172] (0xc00003a420) Data frame received for 1\nI0515 01:04:18.869598 3522 log.go:172] (0xc0008e0000) (1) Data frame handling\nI0515 01:04:18.869606 3522 log.go:172] (0xc0008e0000) (1) Data frame sent\nI0515 01:04:18.869763 3522 log.go:172] (0xc00003a420) (0xc0008e0000) Stream removed, broadcasting: 1\nI0515 01:04:18.869816 3522 log.go:172] (0xc00003a420) Go away received\nI0515 01:04:18.870293 3522 log.go:172] (0xc00003a420) (0xc0008e0000) Stream removed, broadcasting: 1\nI0515 01:04:18.870317 3522 log.go:172] (0xc00003a420) (0xc0008f19a0) Stream removed, broadcasting: 3\nI0515 01:04:18.870329 3522 log.go:172] (0xc00003a420) (0xc0008e0780) Stream removed, broadcasting: 5\n" May 15 01:04:18.874: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 01:04:18.874: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 01:04:39.096: INFO: Waiting for StatefulSet statefulset-2119/ss2 to complete update May 15 01:04:39.096: INFO: Waiting for Pod statefulset-2119/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 01:04:49.104: INFO: Waiting for StatefulSet statefulset-2119/ss2 to complete update STEP: Rolling back to a previous revision May 15 01:04:59.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2119 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 01:04:59.357: INFO: stderr: "I0515 01:04:59.237497 3542 log.go:172] (0xc000b888f0) (0xc0002fef00) Create stream\nI0515 01:04:59.237559 3542 log.go:172] (0xc000b888f0) (0xc0002fef00) Stream added, broadcasting: 1\nI0515 01:04:59.239704 3542 log.go:172] (0xc000b888f0) Reply frame received for 1\nI0515 01:04:59.239750 3542 log.go:172] (0xc000b888f0) (0xc00024f040) Create stream\nI0515 01:04:59.239765 3542 log.go:172] (0xc000b888f0) (0xc00024f040) Stream added, broadcasting: 3\nI0515 01:04:59.240959 3542 log.go:172] (0xc000b888f0) Reply frame received for 3\nI0515 01:04:59.241008 3542 log.go:172] (0xc000b888f0) (0xc0002ff540) Create stream\nI0515 01:04:59.241047 3542 log.go:172] (0xc000b888f0) (0xc0002ff540) Stream added, broadcasting: 5\nI0515 01:04:59.242413 3542 log.go:172] (0xc000b888f0) Reply frame received for 5\nI0515 01:04:59.320417 3542 log.go:172] (0xc000b888f0) Data frame received for 5\nI0515 01:04:59.320444 3542 log.go:172] (0xc0002ff540) (5) Data frame handling\nI0515 01:04:59.320462 3542 log.go:172] (0xc0002ff540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 01:04:59.348686 3542 log.go:172] (0xc000b888f0) Data frame received for 3\nI0515 01:04:59.348721 3542 log.go:172] (0xc00024f040) (3) Data frame handling\nI0515 01:04:59.348742 3542 log.go:172] (0xc00024f040) (3) Data frame sent\nI0515 01:04:59.348771 3542 log.go:172] (0xc000b888f0) Data frame received for 5\nI0515 01:04:59.348797 3542 
log.go:172] (0xc0002ff540) (5) Data frame handling\nI0515 01:04:59.349054 3542 log.go:172] (0xc000b888f0) Data frame received for 3\nI0515 01:04:59.349073 3542 log.go:172] (0xc00024f040) (3) Data frame handling\nI0515 01:04:59.350972 3542 log.go:172] (0xc000b888f0) Data frame received for 1\nI0515 01:04:59.350999 3542 log.go:172] (0xc0002fef00) (1) Data frame handling\nI0515 01:04:59.351017 3542 log.go:172] (0xc0002fef00) (1) Data frame sent\nI0515 01:04:59.351034 3542 log.go:172] (0xc000b888f0) (0xc0002fef00) Stream removed, broadcasting: 1\nI0515 01:04:59.351059 3542 log.go:172] (0xc000b888f0) Go away received\nI0515 01:04:59.351526 3542 log.go:172] (0xc000b888f0) (0xc0002fef00) Stream removed, broadcasting: 1\nI0515 01:04:59.351547 3542 log.go:172] (0xc000b888f0) (0xc00024f040) Stream removed, broadcasting: 3\nI0515 01:04:59.351558 3542 log.go:172] (0xc000b888f0) (0xc0002ff540) Stream removed, broadcasting: 5\n" May 15 01:04:59.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 01:04:59.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 01:05:09.389: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 15 01:05:19.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2119 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 01:05:19.710: INFO: stderr: "I0515 01:05:19.617961 3563 log.go:172] (0xc000905550) (0xc0006d46e0) Create stream\nI0515 01:05:19.618050 3563 log.go:172] (0xc000905550) (0xc0006d46e0) Stream added, broadcasting: 1\nI0515 01:05:19.624811 3563 log.go:172] (0xc000905550) Reply frame received for 1\nI0515 01:05:19.624868 3563 log.go:172] (0xc000905550) (0xc0004e57c0) Create stream\nI0515 01:05:19.624892 3563 log.go:172] (0xc000905550) (0xc0004e57c0) Stream added, broadcasting: 3\nI0515 01:05:19.627992 3563 log.go:172] (0xc000905550) Reply frame received for 3\nI0515 01:05:19.628020 3563 log.go:172] (0xc000905550) (0xc0006d5040) Create stream\nI0515 01:05:19.628031 3563 log.go:172] (0xc000905550) (0xc0006d5040) Stream added, broadcasting: 5\nI0515 01:05:19.628670 3563 log.go:172] (0xc000905550) Reply frame received for 5\nI0515 01:05:19.704781 3563 log.go:172] (0xc000905550) Data frame received for 5\nI0515 01:05:19.704807 3563 log.go:172] (0xc0006d5040) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 01:05:19.704832 3563 log.go:172] (0xc000905550) Data frame received for 3\nI0515 01:05:19.704866 3563 log.go:172] (0xc0004e57c0) (3) Data frame handling\nI0515 01:05:19.704888 3563 log.go:172] (0xc0004e57c0) (3) Data frame sent\nI0515 01:05:19.704930 3563 log.go:172] (0xc0006d5040) (5) Data frame sent\nI0515 01:05:19.704967 3563 log.go:172] (0xc000905550) Data frame received for 5\nI0515 01:05:19.704991 3563 log.go:172] (0xc0006d5040) (5) Data frame handling\nI0515 01:05:19.705406 3563 log.go:172] (0xc000905550) Data frame received for 3\nI0515 01:05:19.705423 3563 log.go:172] (0xc0004e57c0) (3) Data frame handling\nI0515 01:05:19.706190 3563 log.go:172] (0xc000905550) Data frame received for 1\nI0515 01:05:19.706204 3563 log.go:172] (0xc0006d46e0) (1) Data frame handling\nI0515 01:05:19.706217 3563 log.go:172] (0xc0006d46e0) (1) Data frame sent\nI0515 01:05:19.706226 3563 log.go:172] (0xc000905550) (0xc0006d46e0) Stream removed, broadcasting: 
1\nI0515 01:05:19.706235 3563 log.go:172] (0xc000905550) Go away received\nI0515 01:05:19.706602 3563 log.go:172] (0xc000905550) (0xc0006d46e0) Stream removed, broadcasting: 1\nI0515 01:05:19.706615 3563 log.go:172] (0xc000905550) (0xc0004e57c0) Stream removed, broadcasting: 3\nI0515 01:05:19.706622 3563 log.go:172] (0xc000905550) (0xc0006d5040) Stream removed, broadcasting: 5\n" May 15 01:05:19.710: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 01:05:19.710: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 01:05:39.767: INFO: Waiting for StatefulSet statefulset-2119/ss2 to complete update May 15 01:05:39.767: INFO: Waiting for Pod statefulset-2119/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 15 01:05:49.778: INFO: Deleting all statefulset in ns statefulset-2119 May 15 01:05:49.781: INFO: Scaling statefulset ss2 to 0 May 15 01:06:09.814: INFO: Waiting for statefulset status.replicas updated to 0 May 15 01:06:09.816: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:06:09.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2119" for this suite. • [SLOW TEST:151.787 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":249,"skipped":4210,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:06:09.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 15 01:06:09.946: INFO: Waiting up to 5m0s for pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a" in namespace "downward-api-9266" to be "Succeeded or Failed" May 15 01:06:09.968: INFO: Pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.16945ms May 15 01:06:12.107: INFO: Pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161226488s May 15 01:06:14.112: INFO: Pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a": Phase="Running", Reason="", readiness=true. Elapsed: 4.165414928s May 15 01:06:16.116: INFO: Pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169727368s STEP: Saw pod success May 15 01:06:16.116: INFO: Pod "downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a" satisfied condition "Succeeded or Failed" May 15 01:06:16.119: INFO: Trying to get logs from node latest-worker2 pod downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a container dapi-container: STEP: delete the pod May 15 01:06:16.158: INFO: Waiting for pod downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a to disappear May 15 01:06:16.223: INFO: Pod downward-api-0b39ca63-f40c-489d-bd0d-8dd7a0cff89a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:06:16.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9266" for this suite. • [SLOW TEST:6.391 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4225,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:06:16.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0515 01:06:57.674398 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
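The "delete options say so" in this garbage-collector test refers to deleting the replication controller with an Orphan propagation policy, leaving its pods running through the thirty-second observation window above. The DELETE request carries a body roughly like the following sketch (the framework issues it through client-go; recent kubectl exposes the same behavior as --cascade=orphan, older releases as --cascade=false):

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # delete the rc but leave its pods in place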
May 15 01:06:57.674: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:06:57.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4868" for this suite. • [SLOW TEST:41.448 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":251,"skipped":4243,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:06:57.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 15 01:06:57.931: INFO: Waiting up to 5m0s for pod "var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d" in namespace "var-expansion-5067" to be "Succeeded or Failed" May 15 01:06:57.935: INFO: Pod "var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.727043ms May 15 01:06:59.940: INFO: Pod "var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008829013s May 15 01:07:01.972: INFO: Pod "var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040908359s STEP: Saw pod success May 15 01:07:01.972: INFO: Pod "var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d" satisfied condition "Succeeded or Failed" May 15 01:07:01.975: INFO: Trying to get logs from node latest-worker2 pod var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d container dapi-container: STEP: delete the pod May 15 01:07:01.996: INFO: Waiting for pod var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d to disappear May 15 01:07:02.027: INFO: Pod var-expansion-cfc81d89-403d-4684-b565-7f2b6543c21d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:07:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5067" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4253,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:07:02.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
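The "simple DaemonSet" created here is, in spirit, no more than the following; the label and image are placeholders for whatever the framework generates, while the name and namespace match the log. Without a toleration for the node-role.kubernetes.io/master:NoSchedule taint, no pod lands on latest-control-plane, which is why the lines below keep skipping that node.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-9755
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set      # placeholder label
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # placeholder image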
May 15 01:07:02.151: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:02.173: INFO: Number of nodes with available pods: 0 May 15 01:07:02.173: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:03.260: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:03.271: INFO: Number of nodes with available pods: 0 May 15 01:07:03.271: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:04.435: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:04.469: INFO: Number of nodes with available pods: 0 May 15 01:07:04.469: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:05.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:05.463: INFO: Number of nodes with available pods: 0 May 15 01:07:05.463: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:06.333: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:06.391: INFO: Number of nodes with available pods: 0 May 15 01:07:06.392: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:07.417: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:07.421: INFO: Number of nodes with available pods: 0 May 15 01:07:07.421: INFO: Node latest-worker is running more than one daemon pod May 15 01:07:08.254: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:08.296: INFO: Number of nodes with available pods: 2 May 15 01:07:08.296: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 15 01:07:08.373: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:08.380: INFO: Number of nodes with available pods: 1 May 15 01:07:08.380: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:09.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:09.390: INFO: Number of nodes with available pods: 1 May 15 01:07:09.390: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:10.459: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:10.515: INFO: Number of nodes with available pods: 1 May 15 01:07:10.515: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:11.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:11.391: INFO: Number of nodes with available pods: 1 May 15 01:07:11.391: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:12.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:12.389: INFO: Number of nodes with available pods: 1 May 15 01:07:12.389: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:13.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:13.389: INFO: Number of nodes with available pods: 1 May 15 01:07:13.389: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:14.384: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:14.387: INFO: Number of nodes with available pods: 1 May 15 01:07:14.387: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:15.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:15.390: INFO: Number of nodes with available pods: 1 May 15 01:07:15.390: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:16.385: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:16.388: INFO: Number of nodes with available pods: 1 May 15 01:07:16.388: INFO: Node latest-worker2 is running more than one daemon pod May 15 01:07:17.387: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:07:17.397: INFO: Number of nodes with available pods: 2 May 15 01:07:17.397: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9755, will wait for the garbage collector to delete the pods May 15 01:07:17.458: INFO: Deleting DaemonSet.extensions daemon-set took: 6.218253ms May 15 01:07:17.858: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.34029ms May 15 01:07:24.962: INFO: Number of nodes with available pods: 0 May 15 01:07:24.962: INFO: Number of running nodes: 0, number of available pods: 0 May 15 01:07:24.965: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9755/daemonsets","resourceVersion":"4694231"},"items":null} May 15 01:07:24.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9755/pods","resourceVersion":"4694231"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:07:24.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9755" for this suite. • [SLOW TEST:22.947 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":253,"skipped":4253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:07:24.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:07:25.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1732" for this suite. 
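The patch step above is an ordinary merge patch against the secret; a hedged sketch of the kind of body involved, with the label key, value, and data all illustrative rather than the framework's actual payload:

# merge-patch body for the secret; all names and values illustrative
metadata:
  labels:
    testsecret: "true"          # the label the final list step then selects on
data:
  key: dmFsdWUy                 # base64("value2"), replacing the original payload

The later "deleting the secret using a LabelSelector" step is the corresponding collection delete filtered on that same label, after which the final listing confirms the patched secret is gone.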
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":254,"skipped":4286,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:07:25.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 15 01:07:25.279: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:07:32.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8110" for this suite. • [SLOW TEST:7.586 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":255,"skipped":4286,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:07:32.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 15 01:07:32.885: INFO: >>> kubeConfig: /root/.kube/config May 15 01:07:35.812: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:07:46.576: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4641" for this suite. • [SLOW TEST:13.772 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":256,"skipped":4294,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:07:46.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 01:07:54.790: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.794: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.797: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.799: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.806: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.808: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.811: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod 
dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.813: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:54.818: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:07:59.823: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.828: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.831: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.835: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.844: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.846: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.849: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.851: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:07:59.856: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:08:04.822: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.825: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.828: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.830: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.844: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.846: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.847: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.849: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:04.857: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:08:09.823: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.827: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.831: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.834: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.844: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.848: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.850: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.853: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:09.859: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:08:14.822: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.825: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.828: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.830: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested 
resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.838: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.840: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.842: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.844: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:14.850: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:08:19.821: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.824: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.828: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.830: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.835: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.837: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.839: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.841: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local from pod dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494: the server could not find the requested resource (get pods dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494) May 15 01:08:19.861: INFO: Lookups using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6244.svc.cluster.local jessie_udp@dns-test-service-2.dns-6244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6244.svc.cluster.local] May 15 01:08:24.862: INFO: DNS probes using dns-6244/dns-test-38b9f19d-e1e7-4576-8f73-14c23bf77494 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:08:25.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6244" for this suite. • [SLOW TEST:38.951 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":257,"skipped":4302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:08:25.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9142/configmap-test-28fb0cd4-ff8c-4c14-b9e6-718cbb0d6548 STEP: Creating a pod to test consume configMaps May 15 01:08:25.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5" in namespace "configmap-9142" to be "Succeeded or Failed" May 15 01:08:25.716: INFO: Pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.841797ms May 15 01:08:27.733: INFO: Pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027622402s May 15 01:08:29.737: INFO: Pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031832275s May 15 01:08:31.740: INFO: Pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034966395s STEP: Saw pod success May 15 01:08:31.740: INFO: Pod "pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5" satisfied condition "Succeeded or Failed" May 15 01:08:31.771: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5 container env-test: STEP: delete the pod May 15 01:08:31.803: INFO: Waiting for pod pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5 to disappear May 15 01:08:31.811: INFO: Pod pod-configmaps-f45c091f-7a54-4847-9925-939842cb53f5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:08:31.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9142" for this suite. • [SLOW TEST:6.280 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4336,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:08:31.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-a028d6d3-eb6a-4e30-98ae-d757d2599cda STEP: Creating configMap with name cm-test-opt-upd-8954ced6-0ff8-498d-9d55-218c19e29d24 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a028d6d3-eb6a-4e30-98ae-d757d2599cda STEP: Updating configmap cm-test-opt-upd-8954ced6-0ff8-498d-9d55-218c19e29d24 STEP: Creating configMap with name cm-test-opt-create-f7f9e92f-75b5-456e-b608-6336771dee0e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:08:40.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-852" for this suite. 
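The two ConfigMap tests above exercise the common consumption paths: injecting keys as environment variables, and mounting a ConfigMap as a volume marked optional, where the kubelet later syncs creates, updates, and deletes of the referenced object into the mounted files (the "waiting to observe update in volume" step polls for exactly that). A rough sketch of a pod using both paths, all names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: cm-demo                 # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $DATA_1; sleep 3600"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: cm-env          # illustrative ConfigMap name
          key: data-1
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-opt              # illustrative; may not exist yet
      optional: true            # pod still starts if the ConfigMap is absent

Environment variables are resolved once at container start, while files under an optional configMap volume continue to track the object, which is why the volume test can observe a delete, an update, and a late create from a single running pod.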
• [SLOW TEST:8.228 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:08:40.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 15 01:08:44.698: INFO: Successfully updated pod "labelsupdate713806ca-343d-45fa-8fee-4a2788db74ca" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:08:48.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-779" for this suite. 
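The projected downwardAPI test above depends on the kubelet rewriting the projected file when the pod's labels change, which is what the "Successfully updated pod" line verifies. A minimal sketch of such a projection, names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo             # illustrative
  labels:
    tier: test
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Relabeling the pod (for example, kubectl label pod labels-demo tier=prod --overwrite) is eventually reflected in /etc/podinfo/labels; only labels and annotations can be tracked live this way, since the other downward API fields are immutable for a running pod.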
• [SLOW TEST:8.708 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:08:48.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4716 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4716 STEP: Creating statefulset with conflicting port in namespace statefulset-4716 STEP: Waiting until pod test-pod will start running in namespace statefulset-4716 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4716 May 15 01:08:52.995: INFO: Observed stateful pod in namespace: statefulset-4716, name: ss-0, uid: 10368803-a732-4ed8-9d37-c5389a5591f5, status phase: Pending. Waiting for statefulset controller to delete. May 15 01:08:53.499: INFO: Observed stateful pod in namespace: statefulset-4716, name: ss-0, uid: 10368803-a732-4ed8-9d37-c5389a5591f5, status phase: Failed. Waiting for statefulset controller to delete. May 15 01:08:53.508: INFO: Observed stateful pod in namespace: statefulset-4716, name: ss-0, uid: 10368803-a732-4ed8-9d37-c5389a5591f5, status phase: Failed. Waiting for statefulset controller to delete. 
May 15 01:08:53.561: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4716 STEP: Removing pod with conflicting port in namespace statefulset-4716 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4716 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 15 01:08:59.659: INFO: Deleting all statefulset in ns statefulset-4716 May 15 01:08:59.662: INFO: Scaling statefulset ss to 0 May 15 01:09:09.724: INFO: Waiting for statefulset status.replicas updated to 0 May 15 01:09:09.726: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:09:09.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4716" for this suite. • [SLOW TEST:20.993 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":261,"skipped":4423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:09:09.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-969 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-969 STEP: creating replication controller externalsvc in namespace services-969 I0515 01:09:10.083532 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-969, replica count: 2 I0515 01:09:13.133939 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 01:09:16.134171 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 15 01:09:16.235: INFO: Creating new exec pod May 15 01:09:20.262: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-969 execpod7cssh -- /bin/sh -x -c nslookup nodeport-service' May 15 01:09:23.483: INFO: stderr: "I0515 01:09:23.353989 3582 log.go:172] (0xc000928790) (0xc00062b040) Create stream\nI0515 01:09:23.354038 3582 log.go:172] (0xc000928790) (0xc00062b040) Stream added, broadcasting: 1\nI0515 01:09:23.356358 3582 log.go:172] (0xc000928790) Reply frame received for 1\nI0515 01:09:23.356392 3582 log.go:172] (0xc000928790) (0xc000602dc0) Create stream\nI0515 01:09:23.356401 3582 log.go:172] (0xc000928790) (0xc000602dc0) Stream added, broadcasting: 3\nI0515 01:09:23.357428 3582 log.go:172] (0xc000928790) Reply frame received for 3\nI0515 01:09:23.357498 3582 log.go:172] (0xc000928790) (0xc0005e2640) Create stream\nI0515 01:09:23.357511 3582 log.go:172] (0xc000928790) (0xc0005e2640) Stream added, broadcasting: 5\nI0515 01:09:23.358423 3582 log.go:172] (0xc000928790) Reply frame received for 5\nI0515 01:09:23.455238 3582 log.go:172] (0xc000928790) Data frame received for 5\nI0515 01:09:23.455266 3582 log.go:172] (0xc0005e2640) (5) Data frame handling\nI0515 01:09:23.455298 3582 log.go:172] (0xc0005e2640) (5) Data frame sent\n+ nslookup nodeport-service\nI0515 01:09:23.474085 3582 log.go:172] (0xc000928790) Data frame received for 3\nI0515 01:09:23.474112 3582 log.go:172] (0xc000602dc0) (3) Data frame handling\nI0515 01:09:23.474136 3582 log.go:172] (0xc000602dc0) (3) Data frame sent\nI0515 01:09:23.474913 3582 log.go:172] (0xc000928790) Data frame received for 3\nI0515 01:09:23.474928 3582 log.go:172] (0xc000602dc0) (3) Data frame handling\nI0515 01:09:23.474937 3582 log.go:172] (0xc000602dc0) (3) Data frame sent\nI0515 01:09:23.475440 3582 log.go:172] (0xc000928790) Data frame received for 3\nI0515 01:09:23.475468 3582 log.go:172] (0xc000602dc0) (3) Data frame handling\nI0515 01:09:23.475537 3582 log.go:172] (0xc000928790) Data frame received for 5\nI0515 01:09:23.475549 3582 log.go:172] (0xc0005e2640) (5) Data frame handling\nI0515 01:09:23.477742 3582 log.go:172] (0xc000928790) Data frame received for 1\nI0515 01:09:23.477770 3582 log.go:172] (0xc00062b040) (1) Data frame handling\nI0515 01:09:23.477791 3582 log.go:172] (0xc00062b040) (1) Data frame sent\nI0515 01:09:23.477817 3582 log.go:172] (0xc000928790) (0xc00062b040) Stream removed, broadcasting: 1\nI0515 01:09:23.477841 3582 log.go:172] (0xc000928790) Go away received\nI0515 01:09:23.478191 3582 log.go:172] (0xc000928790) (0xc00062b040) Stream removed, broadcasting: 1\nI0515 01:09:23.478213 3582 log.go:172] (0xc000928790) (0xc000602dc0) Stream removed, broadcasting: 3\nI0515 01:09:23.478224 3582 log.go:172] (0xc000928790) (0xc0005e2640) Stream removed, broadcasting: 5\n" May 15 01:09:23.483: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-969.svc.cluster.local\tcanonical name = externalsvc.services-969.svc.cluster.local.\nName:\texternalsvc.services-969.svc.cluster.local\nAddress: 10.111.156.89\n\n" STEP: deleting ReplicationController externalsvc in namespace services-969, will wait for the garbage collector to delete the pods May 15 01:09:23.542: INFO: Deleting ReplicationController externalsvc took: 5.715825ms May 15 01:09:23.642: INFO: Terminating ReplicationController externalsvc pods took: 100.157346ms May 15 01:09:29.617: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:09:29.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-969" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.075 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":262,"skipped":4449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:09:29.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 15 01:09:30.079: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e" in namespace "downward-api-680" to be "Succeeded or Failed" May 15 01:09:30.085: INFO: Pod "downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.982476ms May 15 01:09:32.097: INFO: Pod "downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018302471s May 15 01:09:34.110: INFO: Pod "downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031702417s STEP: Saw pod success May 15 01:09:34.110: INFO: Pod "downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e" satisfied condition "Succeeded or Failed" May 15 01:09:34.113: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e container client-container: STEP: delete the pod May 15 01:09:34.168: INFO: Waiting for pod downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e to disappear May 15 01:09:34.179: INFO: Pod downwardapi-volume-3167704a-53fe-4e13-a10c-4a1c4d99fd3e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:09:34.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-680" for this suite. 
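For the memory-limit flavor above, the downward API exposes container resources through resourceFieldRef rather than fieldRef. A sketch under illustrative names; without an explicit divisor the value is rendered in bytes:

apiVersion: v1
kind: Pod
metadata:
  name: memlimit-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory

With these values the container would print 67108864 (64Mi in bytes), the kind of output the test reads back from the container log before deleting the pod.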
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4473,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:09:34.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 15 01:09:34.452: INFO: Waiting up to 5m0s for pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137" in namespace "containers-5118" to be "Succeeded or Failed" May 15 01:09:34.454: INFO: Pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615569ms May 15 01:09:36.757: INFO: Pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305413169s May 15 01:09:38.961: INFO: Pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508838215s May 15 01:09:40.965: INFO: Pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.512749982s STEP: Saw pod success May 15 01:09:40.965: INFO: Pod "client-containers-7fc46d74-1339-4add-bc93-a6cc636de137" satisfied condition "Succeeded or Failed" May 15 01:09:40.968: INFO: Trying to get logs from node latest-worker2 pod client-containers-7fc46d74-1339-4add-bc93-a6cc636de137 container test-container: STEP: delete the pod May 15 01:09:40.995: INFO: Waiting for pod client-containers-7fc46d74-1339-4add-bc93-a6cc636de137 to disappear May 15 01:09:41.030: INFO: Pod client-containers-7fc46d74-1339-4add-bc93-a6cc636de137 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:09:41.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5118" for this suite. 
• [SLOW TEST:6.852 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4473,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:09:41.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2089 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-2089 May 15 01:09:41.120: INFO: Found 0 stateful pods, waiting for 1 May 15 01:09:51.126: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 15 01:09:51.159: INFO: Deleting all statefulset in ns statefulset-2089 May 15 01:09:51.174: INFO: Scaling statefulset ss to 0 May 15 01:10:11.314: INFO: Waiting for statefulset status.replicas updated to 0 May 15 01:10:11.316: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:11.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2089" for this suite. 
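The scale subresource read and updated above is the same endpoint the CLI uses; a few illustrative commands against the namespace from the log (the test namespace itself is destroyed after the run):

kubectl get statefulset ss -n statefulset-2089 -o jsonpath='{.spec.replicas}'
kubectl scale statefulset ss -n statefulset-2089 --replicas=2
kubectl get --raw /apis/apps/v1/namespaces/statefulset-2089/statefulsets/ss/scale

The last call returns a Scale object whose spec.replicas is what the "updating a scale subresource" step modifies, so clients such as the HorizontalPodAutoscaler can resize a workload without understanding StatefulSet internals.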
• [SLOW TEST:30.509 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":265,"skipped":4473,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:11.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-9468 STEP: creating replication controller nodeport-test in namespace services-9468 I0515 01:10:11.883574 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9468, replica count: 2 I0515 01:10:14.933968 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 01:10:17.934205 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 01:10:17.934: INFO: Creating new exec pod May 15 01:10:22.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9468 execpod7f8tx -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 15 01:10:23.177: INFO: stderr: "I0515 01:10:23.100650 3616 log.go:172] (0xc000a76c60) (0xc0003dafa0) Create stream\nI0515 01:10:23.100689 3616 log.go:172] (0xc000a76c60) (0xc0003dafa0) Stream added, broadcasting: 1\nI0515 01:10:23.102836 3616 log.go:172] (0xc000a76c60) Reply frame received for 1\nI0515 01:10:23.102861 3616 log.go:172] (0xc000a76c60) (0xc0003db5e0) Create stream\nI0515 01:10:23.102867 3616 log.go:172] (0xc000a76c60) (0xc0003db5e0) Stream added, broadcasting: 3\nI0515 01:10:23.103667 3616 log.go:172] (0xc000a76c60) Reply frame received for 3\nI0515 01:10:23.103710 3616 log.go:172] (0xc000a76c60) (0xc000597180) Create stream\nI0515 01:10:23.103724 3616 log.go:172] (0xc000a76c60) (0xc000597180) Stream added, broadcasting: 5\nI0515 01:10:23.104518 3616 log.go:172] (0xc000a76c60) Reply frame received for 5\nI0515 01:10:23.171027 3616 log.go:172] (0xc000a76c60) Data frame received for 3\nI0515 01:10:23.171060 3616 log.go:172] (0xc0003db5e0) (3) Data frame handling\nI0515 01:10:23.171092 3616 log.go:172] 
(0xc000a76c60) Data frame received for 5\nI0515 01:10:23.171113 3616 log.go:172] (0xc000597180) (5) Data frame handling\nI0515 01:10:23.171131 3616 log.go:172] (0xc000597180) (5) Data frame sent\nI0515 01:10:23.171143 3616 log.go:172] (0xc000a76c60) Data frame received for 5\nI0515 01:10:23.171154 3616 log.go:172] (0xc000597180) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0515 01:10:23.172103 3616 log.go:172] (0xc000a76c60) Data frame received for 1\nI0515 01:10:23.172127 3616 log.go:172] (0xc0003dafa0) (1) Data frame handling\nI0515 01:10:23.172136 3616 log.go:172] (0xc0003dafa0) (1) Data frame sent\nI0515 01:10:23.172155 3616 log.go:172] (0xc000a76c60) (0xc0003dafa0) Stream removed, broadcasting: 1\nI0515 01:10:23.172165 3616 log.go:172] (0xc000a76c60) Go away received\nI0515 01:10:23.172394 3616 log.go:172] (0xc000a76c60) (0xc0003dafa0) Stream removed, broadcasting: 1\nI0515 01:10:23.172406 3616 log.go:172] (0xc000a76c60) (0xc0003db5e0) Stream removed, broadcasting: 3\nI0515 01:10:23.172411 3616 log.go:172] (0xc000a76c60) (0xc000597180) Stream removed, broadcasting: 5\n" May 15 01:10:23.177: INFO: stdout: "" May 15 01:10:23.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9468 execpod7f8tx -- /bin/sh -x -c nc -zv -t -w 2 10.108.57.52 80' May 15 01:10:23.421: INFO: stderr: "I0515 01:10:23.358699 3635 log.go:172] (0xc000953290) (0xc000ac43c0) Create stream\nI0515 01:10:23.358767 3635 log.go:172] (0xc000953290) (0xc000ac43c0) Stream added, broadcasting: 1\nI0515 01:10:23.362160 3635 log.go:172] (0xc000953290) Reply frame received for 1\nI0515 01:10:23.362195 3635 log.go:172] (0xc000953290) (0xc000556320) Create stream\nI0515 01:10:23.362206 3635 log.go:172] (0xc000953290) (0xc000556320) Stream added, broadcasting: 3\nI0515 01:10:23.362900 3635 log.go:172] (0xc000953290) Reply frame received for 3\nI0515 01:10:23.362924 3635 log.go:172] (0xc000953290) (0xc000490e60) Create stream\nI0515 01:10:23.362933 3635 log.go:172] (0xc000953290) (0xc000490e60) Stream added, broadcasting: 5\nI0515 01:10:23.363580 3635 log.go:172] (0xc000953290) Reply frame received for 5\nI0515 01:10:23.416027 3635 log.go:172] (0xc000953290) Data frame received for 5\nI0515 01:10:23.416059 3635 log.go:172] (0xc000490e60) (5) Data frame handling\nI0515 01:10:23.416083 3635 log.go:172] (0xc000490e60) (5) Data frame sent\nI0515 01:10:23.416100 3635 log.go:172] (0xc000953290) Data frame received for 5\nI0515 01:10:23.416109 3635 log.go:172] (0xc000490e60) (5) Data frame handling\nI0515 01:10:23.416124 3635 log.go:172] (0xc000953290) Data frame received for 3\nI0515 01:10:23.416147 3635 log.go:172] (0xc000556320) (3) Data frame handling\n+ nc -zv -t -w 2 10.108.57.52 80\nConnection to 10.108.57.52 80 port [tcp/http] succeeded!\nI0515 01:10:23.417791 3635 log.go:172] (0xc000953290) Data frame received for 1\nI0515 01:10:23.417823 3635 log.go:172] (0xc000ac43c0) (1) Data frame handling\nI0515 01:10:23.417856 3635 log.go:172] (0xc000ac43c0) (1) Data frame sent\nI0515 01:10:23.417874 3635 log.go:172] (0xc000953290) (0xc000ac43c0) Stream removed, broadcasting: 1\nI0515 01:10:23.417892 3635 log.go:172] (0xc000953290) Go away received\nI0515 01:10:23.418136 3635 log.go:172] (0xc000953290) (0xc000ac43c0) Stream removed, broadcasting: 1\nI0515 01:10:23.418147 3635 log.go:172] (0xc000953290) (0xc000556320) Stream removed, broadcasting: 3\nI0515 01:10:23.418152 3635 
log.go:172] (0xc000953290) (0xc000490e60) Stream removed, broadcasting: 5\n" May 15 01:10:23.422: INFO: stdout: "" May 15 01:10:23.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9468 execpod7f8tx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30723' May 15 01:10:23.613: INFO: stderr: "I0515 01:10:23.544821 3655 log.go:172] (0xc000b11340) (0xc0000f2fa0) Create stream\nI0515 01:10:23.544867 3655 log.go:172] (0xc000b11340) (0xc0000f2fa0) Stream added, broadcasting: 1\nI0515 01:10:23.547453 3655 log.go:172] (0xc000b11340) Reply frame received for 1\nI0515 01:10:23.547496 3655 log.go:172] (0xc000b11340) (0xc00044c640) Create stream\nI0515 01:10:23.547521 3655 log.go:172] (0xc000b11340) (0xc00044c640) Stream added, broadcasting: 3\nI0515 01:10:23.548293 3655 log.go:172] (0xc000b11340) Reply frame received for 3\nI0515 01:10:23.548325 3655 log.go:172] (0xc000b11340) (0xc0007d43c0) Create stream\nI0515 01:10:23.548336 3655 log.go:172] (0xc000b11340) (0xc0007d43c0) Stream added, broadcasting: 5\nI0515 01:10:23.549361 3655 log.go:172] (0xc000b11340) Reply frame received for 5\nI0515 01:10:23.607064 3655 log.go:172] (0xc000b11340) Data frame received for 3\nI0515 01:10:23.607106 3655 log.go:172] (0xc00044c640) (3) Data frame handling\nI0515 01:10:23.607130 3655 log.go:172] (0xc000b11340) Data frame received for 5\nI0515 01:10:23.607141 3655 log.go:172] (0xc0007d43c0) (5) Data frame handling\nI0515 01:10:23.607169 3655 log.go:172] (0xc0007d43c0) (5) Data frame sent\nI0515 01:10:23.607193 3655 log.go:172] (0xc000b11340) Data frame received for 5\nI0515 01:10:23.607221 3655 log.go:172] (0xc0007d43c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30723\nConnection to 172.17.0.13 30723 port [tcp/30723] succeeded!\nI0515 01:10:23.608196 3655 log.go:172] (0xc000b11340) Data frame received for 1\nI0515 01:10:23.608216 3655 log.go:172] (0xc0000f2fa0) (1) Data frame handling\nI0515 01:10:23.608244 3655 log.go:172] (0xc0000f2fa0) (1) Data frame sent\nI0515 01:10:23.608357 3655 log.go:172] (0xc000b11340) (0xc0000f2fa0) Stream removed, broadcasting: 1\nI0515 01:10:23.608390 3655 log.go:172] (0xc000b11340) Go away received\nI0515 01:10:23.608672 3655 log.go:172] (0xc000b11340) (0xc0000f2fa0) Stream removed, broadcasting: 1\nI0515 01:10:23.608687 3655 log.go:172] (0xc000b11340) (0xc00044c640) Stream removed, broadcasting: 3\nI0515 01:10:23.608695 3655 log.go:172] (0xc000b11340) (0xc0007d43c0) Stream removed, broadcasting: 5\n" May 15 01:10:23.613: INFO: stdout: "" May 15 01:10:23.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9468 execpod7f8tx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30723' May 15 01:10:23.809: INFO: stderr: "I0515 01:10:23.743602 3675 log.go:172] (0xc0009bf080) (0xc00084a6e0) Create stream\nI0515 01:10:23.743656 3675 log.go:172] (0xc0009bf080) (0xc00084a6e0) Stream added, broadcasting: 1\nI0515 01:10:23.746044 3675 log.go:172] (0xc0009bf080) Reply frame received for 1\nI0515 01:10:23.746099 3675 log.go:172] (0xc0009bf080) (0xc000850000) Create stream\nI0515 01:10:23.746123 3675 log.go:172] (0xc0009bf080) (0xc000850000) Stream added, broadcasting: 3\nI0515 01:10:23.746902 3675 log.go:172] (0xc0009bf080) Reply frame received for 3\nI0515 01:10:23.746932 3675 log.go:172] (0xc0009bf080) (0xc000a70140) Create stream\nI0515 01:10:23.746946 3675 log.go:172] (0xc0009bf080) (0xc000a70140) Stream added, broadcasting: 
5\nI0515 01:10:23.747645 3675 log.go:172] (0xc0009bf080) Reply frame received for 5\nI0515 01:10:23.800622 3675 log.go:172] (0xc0009bf080) Data frame received for 3\nI0515 01:10:23.800675 3675 log.go:172] (0xc000850000) (3) Data frame handling\nI0515 01:10:23.800711 3675 log.go:172] (0xc0009bf080) Data frame received for 5\nI0515 01:10:23.800742 3675 log.go:172] (0xc000a70140) (5) Data frame handling\nI0515 01:10:23.800801 3675 log.go:172] (0xc000a70140) (5) Data frame sent\nI0515 01:10:23.800836 3675 log.go:172] (0xc0009bf080) Data frame received for 5\nI0515 01:10:23.800872 3675 log.go:172] (0xc000a70140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30723\nConnection to 172.17.0.12 30723 port [tcp/30723] succeeded!\nI0515 01:10:23.802332 3675 log.go:172] (0xc0009bf080) Data frame received for 1\nI0515 01:10:23.802355 3675 log.go:172] (0xc00084a6e0) (1) Data frame handling\nI0515 01:10:23.802370 3675 log.go:172] (0xc00084a6e0) (1) Data frame sent\nI0515 01:10:23.802386 3675 log.go:172] (0xc0009bf080) (0xc00084a6e0) Stream removed, broadcasting: 1\nI0515 01:10:23.802434 3675 log.go:172] (0xc0009bf080) Go away received\nI0515 01:10:23.805608 3675 log.go:172] (0xc0009bf080) (0xc00084a6e0) Stream removed, broadcasting: 1\nI0515 01:10:23.805650 3675 log.go:172] (0xc0009bf080) (0xc000850000) Stream removed, broadcasting: 3\nI0515 01:10:23.805664 3675 log.go:172] (0xc0009bf080) (0xc000a70140) Stream removed, broadcasting: 5\n" May 15 01:10:23.809: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:23.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9468" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.267 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":266,"skipped":4489,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:23.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 15 01:10:23.944: INFO: Waiting up to 5m0s for pod "pod-b2ec5384-011a-4982-a728-42da8856990f" in namespace "emptydir-325" to be "Succeeded or Failed" May 15 01:10:23.979: INFO: Pod "pod-b2ec5384-011a-4982-a728-42da8856990f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.695562ms May 15 01:10:25.983: INFO: Pod "pod-b2ec5384-011a-4982-a728-42da8856990f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039243084s May 15 01:10:27.988: INFO: Pod "pod-b2ec5384-011a-4982-a728-42da8856990f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043428151s STEP: Saw pod success May 15 01:10:27.988: INFO: Pod "pod-b2ec5384-011a-4982-a728-42da8856990f" satisfied condition "Succeeded or Failed" May 15 01:10:27.991: INFO: Trying to get logs from node latest-worker pod pod-b2ec5384-011a-4982-a728-42da8856990f container test-container: STEP: delete the pod May 15 01:10:28.032: INFO: Waiting for pod pod-b2ec5384-011a-4982-a728-42da8856990f to disappear May 15 01:10:28.044: INFO: Pod pod-b2ec5384-011a-4982-a728-42da8856990f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:28.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-325" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4491,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:28.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 01:10:28.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 01:10:30.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 01:10:32.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101828, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 01:10:35.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:35.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3776" for this suite. STEP: Destroying namespace "webhook-3776-markers" for this suite. 
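The discovery assertions above can be reproduced against any cluster with kubectl's raw API access; a minimal sketch, assuming jq is available for filtering (the filters themselves are illustrative, not part of the test):

    kubectl get --raw /apis | jq '.groups[] | select(.name == "admissionregistration.k8s.io")'
    kubectl get --raw /apis/admissionregistration.k8s.io | jq '.versions[].groupVersion'
    kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq -r '.resources[].name'

The last call should list both mutatingwebhookconfigurations and validatingwebhookconfigurations, which is what the test asserts when it walks the three discovery documents.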
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.863 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":268,"skipped":4512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:35.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8ea862d1-19e4-450d-8840-1c5b6fcd8f6a STEP: Creating a pod to test consume secrets May 15 01:10:36.140: INFO: Waiting up to 5m0s for pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d" in namespace "secrets-701" to be "Succeeded or Failed" May 15 01:10:36.242: INFO: Pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.422374ms May 15 01:10:38.246: INFO: Pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10604068s May 15 01:10:40.250: INFO: Pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d": Phase="Running", Reason="", readiness=true. Elapsed: 4.109551564s May 15 01:10:42.255: INFO: Pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114489005s STEP: Saw pod success May 15 01:10:42.255: INFO: Pod "pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d" satisfied condition "Succeeded or Failed" May 15 01:10:42.258: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d container secret-volume-test: STEP: delete the pod May 15 01:10:42.310: INFO: Waiting for pod pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d to disappear May 15 01:10:42.320: INFO: Pod pod-secrets-dee9320e-dcb3-43e2-9541-920fa336496d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:42.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-701" for this suite. 
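For reference, the pod this test builds can be approximated with a manifest like the following; a minimal sketch with hypothetical resource names and a busybox image (the suite generates its own names and test image). Group ownership from fsGroup plus a group-readable defaultMode (0440) is what lets the non-root user read the projected key:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000       # non-root, as required by the test
        fsGroup: 2000         # group ownership applied to the mounted files
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0440   # owner+group read-only mode for the key files
    EOF

Once the pod succeeds, its logs show the ownership and mode that defaultMode and fsGroup produced on the mounted files.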
• [SLOW TEST:6.412 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:42.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 15 01:10:42.402: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1909 /api/v1/namespaces/watch-1909/configmaps/e2e-watch-test-resource-version ad8527e8-baa9-428b-a474-709953adab46 4695745 0 2020-05-15 01:10:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-15 01:10:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 15 01:10:42.402: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1909 /api/v1/namespaces/watch-1909/configmaps/e2e-watch-test-resource-version ad8527e8-baa9-428b-a474-709953adab46 4695746 0 2020-05-15 01:10:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-15 01:10:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:42.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1909" for this suite. 
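The watch semantics exercised here are plain API behavior and can be replayed by hand; a minimal sketch with hypothetical names that mirrors the test's create/modify/modify/delete sequence and then watches from the first update's resourceVersion:

    kubectl create configmap watch-demo --from-literal=mutation=0
    RV=$(kubectl patch configmap watch-demo -p '{"data":{"mutation":"1"}}' -o jsonpath='{.metadata.resourceVersion}')
    kubectl patch configmap watch-demo -p '{"data":{"mutation":"2"}}'
    kubectl delete configmap watch-demo
    # Replay every event that happened after the first update:
    kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}"

As in the test output above, the stream should contain the second MODIFIED event and the DELETED event, provided the requested resourceVersion is still inside the API server's watch cache window.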
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":270,"skipped":4565,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:42.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 15 01:10:42.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7545' May 15 01:10:42.828: INFO: stderr: "" May 15 01:10:42.828: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 01:10:42.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7545' May 15 01:10:42.971: INFO: stderr: "" May 15 01:10:42.971: INFO: stdout: "update-demo-nautilus-bv659 update-demo-nautilus-cxgxf " May 15 01:10:42.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv659 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7545' May 15 01:10:43.070: INFO: stderr: "" May 15 01:10:43.070: INFO: stdout: "" May 15 01:10:43.070: INFO: update-demo-nautilus-bv659 is created but not running May 15 01:10:48.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7545' May 15 01:10:48.161: INFO: stderr: "" May 15 01:10:48.161: INFO: stdout: "update-demo-nautilus-bv659 update-demo-nautilus-cxgxf " May 15 01:10:48.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv659 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7545' May 15 01:10:48.259: INFO: stderr: "" May 15 01:10:48.259: INFO: stdout: "true" May 15 01:10:48.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv659 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7545' May 15 01:10:48.355: INFO: stderr: "" May 15 01:10:48.355: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 01:10:48.355: INFO: validating pod update-demo-nautilus-bv659 May 15 01:10:48.361: INFO: got data: { "image": "nautilus.jpg" } May 15 01:10:48.361: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 01:10:48.361: INFO: update-demo-nautilus-bv659 is verified up and running May 15 01:10:48.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxgxf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7545' May 15 01:10:48.459: INFO: stderr: "" May 15 01:10:48.459: INFO: stdout: "true" May 15 01:10:48.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cxgxf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7545' May 15 01:10:48.557: INFO: stderr: "" May 15 01:10:48.557: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 01:10:48.557: INFO: validating pod update-demo-nautilus-cxgxf May 15 01:10:48.561: INFO: got data: { "image": "nautilus.jpg" } May 15 01:10:48.561: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 01:10:48.561: INFO: update-demo-nautilus-cxgxf is verified up and running STEP: using delete to clean up resources May 15 01:10:48.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7545' May 15 01:10:48.670: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 15 01:10:48.670: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 15 01:10:48.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7545' May 15 01:10:48.946: INFO: stderr: "No resources found in kubectl-7545 namespace.\n" May 15 01:10:48.946: INFO: stdout: "" May 15 01:10:48.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7545 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 01:10:49.063: INFO: stderr: "" May 15 01:10:49.063: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:49.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7545" for this suite. • [SLOW TEST:6.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":271,"skipped":4569,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 15 01:10:49.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-557' May 15 01:10:49.984: INFO: stderr: "" May 15 01:10:49.984: INFO: stdout: "pod/pause created\n" May 15 01:10:49.984: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 15 01:10:49.984: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-557" to be "running and ready" May 15 01:10:49.997: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.915712ms May 15 01:10:52.139: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155006113s May 15 01:10:54.143: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.158951199s May 15 01:10:54.143: INFO: Pod "pause" satisfied condition "running and ready" May 15 01:10:54.143: INFO: Wanted all 1 pods to be running and ready. 
Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 15 01:10:54.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-557' May 15 01:10:54.273: INFO: stderr: "" May 15 01:10:54.273: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 15 01:10:54.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-557' May 15 01:10:54.383: INFO: stderr: "" May 15 01:10:54.383: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 15 01:10:54.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-557' May 15 01:10:54.505: INFO: stderr: "" May 15 01:10:54.505: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 15 01:10:54.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-557' May 15 01:10:54.611: INFO: stderr: "" May 15 01:10:54.611: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 15 01:10:54.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-557' May 15 01:10:54.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 01:10:54.747: INFO: stdout: "pod \"pause\" force deleted\n" May 15 01:10:54.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-557' May 15 01:10:54.844: INFO: stderr: "No resources found in kubectl-557 namespace.\n" May 15 01:10:54.844: INFO: stdout: "" May 15 01:10:54.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-557 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 01:10:54.928: INFO: stderr: "" May 15 01:10:54.928: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:54.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-557" for this suite. 
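The label round-trip in this test is standard kubectl syntax: 'key=value' adds or updates a label, and a trailing '-' removes it. A minimal sketch with hypothetical names (the pause image tag is illustrative):

    kubectl run pause --image=k8s.gcr.io/pause:3.2 --restart=Never
    kubectl label pod pause testing-label=testing-label-value
    kubectl get pod pause -L testing-label     # -L prints the label as an extra column
    kubectl label pod pause testing-label-     # trailing '-' deletes the label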
• [SLOW TEST:5.864 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":272,"skipped":4572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:54.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 15 01:10:55.232: INFO: Waiting up to 5m0s for pod "client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf" in namespace "containers-4092" to be "Succeeded or Failed" May 15 01:10:55.251: INFO: Pod "client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.209204ms May 15 01:10:57.255: INFO: Pod "client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023453534s May 15 01:10:59.259: INFO: Pod "client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027195457s STEP: Saw pod success May 15 01:10:59.259: INFO: Pod "client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf" satisfied condition "Succeeded or Failed" May 15 01:10:59.262: INFO: Trying to get logs from node latest-worker pod client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf container test-container: STEP: delete the pod May 15 01:10:59.341: INFO: Waiting for pod client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf to disappear May 15 01:10:59.349: INFO: Pod client-containers-c0e42786-51e8-4a09-8a7d-f40b68ad7faf no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:10:59.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4092" for this suite. 
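What the next test verifies maps directly onto the pod API: 'command' replaces the image's ENTRYPOINT and 'args' replaces its CMD, so setting both overrides the image defaults entirely. A minimal sketch, assuming a busybox image and a hypothetical pod name:

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/echo"]             # replaces the image ENTRYPOINT
        args: ["override", "arguments"]    # replaces the image CMD
    EOF
    kubectl logs override-demo   # once it completes, prints: override arguments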
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4595,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:10:59.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:10:59.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9836' May 15 01:10:59.815: INFO: stderr: "" May 15 01:10:59.815: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 15 01:10:59.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9836' May 15 01:11:00.118: INFO: stderr: "" May 15 01:11:00.118: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 15 01:11:01.237: INFO: Selector matched 1 pods for map[app:agnhost] May 15 01:11:01.237: INFO: Found 0 / 1 May 15 01:11:02.120: INFO: Selector matched 1 pods for map[app:agnhost] May 15 01:11:02.120: INFO: Found 0 / 1 May 15 01:11:03.121: INFO: Selector matched 1 pods for map[app:agnhost] May 15 01:11:03.121: INFO: Found 1 / 1 May 15 01:11:03.121: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 01:11:03.124: INFO: Selector matched 1 pods for map[app:agnhost] May 15 01:11:03.124: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 15 01:11:03.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-rjkvf --namespace=kubectl-9836' May 15 01:11:03.235: INFO: stderr: "" May 15 01:11:03.235: INFO: stdout: "Name: agnhost-master-rjkvf\nNamespace: kubectl-9836\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Fri, 15 May 2020 01:10:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.4\nIPs:\n IP: 10.244.1.4\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://05ec81993f36458b9b120006b42f9f2b30c23c293334b30d149006cd8ef13bdd\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 15 May 2020 01:11:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-m46n9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-m46n9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-m46n9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9836/agnhost-master-rjkvf to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 15 01:11:03.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9836' May 15 01:11:03.354: INFO: stderr: "" May 15 01:11:03.354: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9836\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-rjkvf\n" May 15 01:11:03.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9836' May 15 01:11:03.463: INFO: stderr: "" May 15 01:11:03.463: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9836\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.108.213.154\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.4:6379\nSession Affinity: None\nEvents: \n" May 15 01:11:03.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' May 15 01:11:03.592: INFO: stderr: "" May 15 01:11:03.592: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 15 May 2020 01:10:58 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 15 May 2020 01:10:52 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 15 May 2020 01:10:52 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 15 May 2020 01:10:52 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 15 May 2020 01:10:52 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 15d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 15d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 15d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 15 01:11:03.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-9836' May 15 01:11:03.727: INFO: stderr: "" May 15 01:11:03.727: INFO: stdout: "Name: kubectl-9836\nLabels: e2e-framework=kubectl\n e2e-run=d3ecd145-5de8-4bc3-91a4-a686f614c9c3\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:11:03.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9836" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":274,"skipped":4596,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:11:03.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 15 01:11:03.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 15 01:11:03.887: INFO: stderr: "" May 15 01:11:03.888: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:11:03.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2767" for this suite. 
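Both of the preceding checks use read-only kubectl verbs that are just as useful outside the suite; illustrative invocations, with angle-bracket values as placeholders:

    kubectl describe pod <pod> -n <namespace>    # conditions, mounts, events
    kubectl describe rc <name> -n <namespace>    # replica status and pod template
    kubectl describe node <node>                 # capacity, allocatable, non-terminated pods
    kubectl cluster-info                         # master and KubeDNS endpoints
    kubectl cluster-info dump                    # full diagnostic dump for debugging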
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":275,"skipped":4599,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:11:03.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:11:03.971: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 15 01:11:06.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -' May 15 01:11:11.438: INFO: stderr: "" May 15 01:11:11.438: INFO: stdout: "e2e-test-crd-publish-openapi-1325-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 15 01:11:11.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 delete e2e-test-crd-publish-openapi-1325-crds test-foo' May 15 01:11:11.568: INFO: stderr: "" May 15 01:11:11.568: INFO: stdout: "e2e-test-crd-publish-openapi-1325-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 15 01:11:11.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -' May 15 01:11:11.887: INFO: stderr: "" May 15 01:11:11.887: INFO: stdout: "e2e-test-crd-publish-openapi-1325-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 15 01:11:11.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 delete e2e-test-crd-publish-openapi-1325-crds test-foo' May 15 01:11:11.992: INFO: stderr: "" May 15 01:11:11.992: INFO: stdout: "e2e-test-crd-publish-openapi-1325-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 15 01:11:11.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -' May 15 01:11:12.247: INFO: rc: 1 May 15 01:11:12.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -' May 15 01:11:12.489: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 15 01:11:12.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -' May 15 01:11:12.742: INFO: 
rc: 1 May 15 01:11:12.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -' May 15 01:11:12.990: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 15 01:11:12.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1325-crds' May 15 01:11:13.226: INFO: stderr: "" May 15 01:11:13.226: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1325-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 15 01:11:13.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1325-crds.metadata' May 15 01:11:13.507: INFO: stderr: "" May 15 01:11:13.507: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1325-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 15 01:11:13.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1325-crds.spec' May 15 01:11:13.752: INFO: stderr: "" May 15 01:11:13.752: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1325-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 15 01:11:13.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1325-crds.spec.bars' May 15 01:11:14.034: INFO: stderr: "" May 15 01:11:14.034: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1325-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 15 01:11:14.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1325-crds.spec.bars2' May 15 01:11:14.277: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:11:17.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7917" for this suite. • [SLOW TEST:13.339 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":276,"skipped":4604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:11:17.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 01:11:18.261: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 01:11:20.290: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101878, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101878, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101878, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725101878, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 01:11:23.328: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:11:23.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6434-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:11:24.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7869" for this suite. STEP: Destroying namespace "webhook-7869-markers" for this suite. 
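The registration STEP above creates a MutatingWebhookConfiguration that intercepts writes to the e2e-test-webhook-6434-crds custom resource while the CRD's storage version is flipped from v1 to v2. A minimal configuration along these lines — the webhook name, path, and caBundle handling are illustrative, not taken from the test source; the group, resource, service name, and namespace come from the log above — would be:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook                      # illustrative name
webhooks:
- name: mutate-custom-resource.webhook.example.com     # illustrative; must be a qualified name
  rules:
  - apiGroups: ["webhook.example.com"]                 # the CR's group, per the log above
    apiVersions: ["v1", "v2"]                          # both served versions of the CRD
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-6434-crds"]
  clientConfig:
    service:
      namespace: webhook-7869                          # test namespace from the log
      name: e2e-test-webhook                           # service paired with the webhook pod above
      path: /mutating-custom-resource                  # illustrative path
    caBundle: ""                                       # the test injects its generated CA bundle here
  sideEffects: None
  admissionReviewVersions: ["v1"]

Because the webhook matches both apiVersions, the mutation applies no matter which version happens to be the storage version at the time of the write — which is exactly what the patch-while-v2-is-storage STEP verifies.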
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.820 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":277,"skipped":4630,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:11:25.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-1f7f66f8-446c-44ee-894d-063e5c4e557c in namespace container-probe-8125 May 15 01:11:29.416: INFO: Started pod busybox-1f7f66f8-446c-44ee-894d-063e5c4e557c in namespace container-probe-8125 STEP: checking the pod's current state and verifying that restartCount is present May 15 01:11:29.418: INFO: Initial restart count of pod busybox-1f7f66f8-446c-44ee-894d-063e5c4e557c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:15:30.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8125" for this suite. 
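This probe test holds the pod for roughly four minutes (01:11:29 to 01:15:30 above) and asserts that restartCount stays 0. A sketch of the kind of pod it creates — the image and probe timings are typical values, not lifted from the test source:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness                 # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create /tmp/health once and keep running; the file is never removed,
    # so the exec probe below keeps succeeding and no restart is triggered.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1

Removing /tmp/health inside the container would flip the probe to failing and the kubelet would restart the container; the companion "should be restarted" conformance test exercises exactly that path.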
• [SLOW TEST:245.140 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4640,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:15:30.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0515 01:15:42.855936 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 01:15:42.855: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:15:42.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7242" for this suite. 
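The key setup in the garbage-collector test above is pods carrying two ownerReferences: one to simpletest-rc-to-be-deleted and one to simpletest-rc-to-stay. When the first RC is deleted with dependents pending, the collector must leave those pods alone because a live owner remains. The patched metadata looks roughly like this sketch (UIDs are placeholders; the test reads them from the live RC objects):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-2sxwd       # one of the pods listed later in this log
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000000   # placeholder; real UID of the RC being deleted
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 11111111-1111-1111-1111-111111111111   # placeholder; real UID of the surviving RC

An object is garbage-collected only when every ownerReference points at a deleted owner, so the second reference keeps these pods out of the collector's delete queue — hence the simpletest pods still showing up on the nodes in the scheduler test that follows.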
• [SLOW TEST:13.128 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":279,"skipped":4650,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:15:43.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:15:43.405: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:15:43.415: INFO: Waiting for terminating namespaces to be deleted... May 15 01:15:43.417: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 15 01:15:43.422: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 15 01:15:43.422: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 15 01:15:43.422: INFO: simpletest-rc-to-be-deleted-2sxwd from gc-7242 started at 2020-05-15 01:15:30 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container nginx ready: true, restart count 0 May 15 01:15:43.422: INFO: simpletest-rc-to-be-deleted-5hqhr from gc-7242 started at 2020-05-15 01:15:30 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container nginx ready: false, restart count 0 May 15 01:15:43.422: INFO: simpletest-rc-to-be-deleted-8cwkd from gc-7242 started at 2020-05-15 01:15:30 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container nginx ready: true, restart count 0 May 15 01:15:43.422: INFO: simpletest-rc-to-be-deleted-jlj69 from gc-7242 started at 2020-05-15 01:15:30 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container nginx ready: false, restart count 0 May 15 01:15:43.422: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container kindnet-cni ready: true, restart count 0 May 15 01:15:43.422: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 01:15:43.422: INFO: Container kube-proxy ready: true, restart count 0 May 
15 01:15:43.422: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 15 01:15:43.450: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 15 01:15:43.450: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 15 01:15:43.450: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 15 01:15:43.450: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 15 01:15:43.450: INFO: simpletest-rc-to-be-deleted-9nnjg from gc-7242 started at 2020-05-15 01:15:30 +0000 UTC (1 container statuses recorded) May 15 01:15:43.450: INFO: Container nginx ready: true, restart count 0 May 15 01:15:43.450: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 01:15:43.450: INFO: Container kindnet-cni ready: true, restart count 0 May 15 01:15:43.450: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 01:15:43.450: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-836b912e-fb46-4efd-906f-dbc9565159f1 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-836b912e-fb46-4efd-906f-dbc9565159f1 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-836b912e-fb46-4efd-906f-dbc9565159f1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:16:02.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7496" for this suite. 
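All three pods above ask for hostPort 54321 yet land on the same node, because the kubelet's port-conflict check keys on the (hostIP, hostPort, protocol) triple and each pod differs in at least one component. A sketch of the first pod — the pod name, hostPort values, node label, and agnhost image follow the log; the container command is left to the image default:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-836b912e-fb46-4efd-906f-dbc9565159f1: "90"   # label applied to the node above
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1     # pod2 uses 127.0.0.2; pod3 uses 127.0.0.2 with protocol: UDP
      protocol: TCP

Changing only the hostIP (pod2) or only the protocol (pod3) is enough to avoid a conflict; a fourth pod repeating an already-used triple would stay Pending on that node.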
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:19.136 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":280,"skipped":4657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:16:02.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:16:15.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5860" for this suite. • [SLOW TEST:13.296 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":281,"skipped":4687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:16:15.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:16:15.868: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 15 01:16:15.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:15.937: INFO: Number of nodes with available pods: 0 May 15 01:16:15.937: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:16.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:17.004: INFO: Number of nodes with available pods: 0 May 15 01:16:17.005: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:17.942: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:17.945: INFO: Number of nodes with available pods: 0 May 15 01:16:17.945: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:19.018: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:19.081: INFO: Number of nodes with available pods: 0 May 15 01:16:19.081: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:19.943: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:19.946: INFO: Number of nodes with available pods: 0 May 15 01:16:19.946: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:20.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:21.001: INFO: Number of nodes with available pods: 2 May 15 01:16:21.001: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 15 01:16:21.123: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 15 01:16:21.123: INFO: Wrong image for pod: daemon-set-tn4k5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:21.186: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:22.192: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:22.192: INFO: Wrong image for pod: daemon-set-tn4k5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:22.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:23.263: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:23.263: INFO: Wrong image for pod: daemon-set-tn4k5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:23.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:24.191: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:24.191: INFO: Wrong image for pod: daemon-set-tn4k5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:24.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:25.197: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:25.197: INFO: Wrong image for pod: daemon-set-tn4k5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:25.197: INFO: Pod daemon-set-tn4k5 is not available May 15 01:16:25.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:26.190: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:26.190: INFO: Pod daemon-set-zqshv is not available May 15 01:16:26.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:27.191: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 15 01:16:27.191: INFO: Pod daemon-set-zqshv is not available May 15 01:16:27.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:28.226: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:28.227: INFO: Pod daemon-set-zqshv is not available May 15 01:16:28.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:29.293: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:29.298: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:30.191: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:30.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:31.190: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:31.190: INFO: Pod daemon-set-dqb4n is not available May 15 01:16:31.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:32.190: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:32.190: INFO: Pod daemon-set-dqb4n is not available May 15 01:16:32.193: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:33.190: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:33.191: INFO: Pod daemon-set-dqb4n is not available May 15 01:16:33.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:34.191: INFO: Wrong image for pod: daemon-set-dqb4n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 15 01:16:34.191: INFO: Pod daemon-set-dqb4n is not available May 15 01:16:34.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:35.192: INFO: Pod daemon-set-db8bv is not available May 15 01:16:35.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 15 01:16:35.202: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:35.207: INFO: Number of nodes with available pods: 1 May 15 01:16:35.207: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:36.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:36.214: INFO: Number of nodes with available pods: 1 May 15 01:16:36.214: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:37.214: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:37.311: INFO: Number of nodes with available pods: 1 May 15 01:16:37.311: INFO: Node latest-worker is running more than one daemon pod May 15 01:16:38.227: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 01:16:38.231: INFO: Number of nodes with available pods: 2 May 15 01:16:38.231: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-368, will wait for the garbage collector to delete the pods May 15 01:16:38.304: INFO: Deleting DaemonSet.extensions daemon-set took: 6.413987ms May 15 01:16:38.605: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.565887ms May 15 01:16:45.352: INFO: Number of nodes with available pods: 0 May 15 01:16:45.352: INFO: Number of running nodes: 0, number of available pods: 0 May 15 01:16:45.354: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-368/daemonsets","resourceVersion":"4697712"},"items":null} May 15 01:16:45.356: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-368/pods","resourceVersion":"4697712"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:16:45.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-368" for this suite. 
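The update phase above proceeds one pod at a time — each node's old httpd pod is taken down, a replacement running the agnhost image becomes available, and only then does the next node follow — which is the behavior of the RollingUpdate strategy with maxUnavailable: 1 (also the default). A sketch of such a DaemonSet (labels are illustrative; the images are the ones in the log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                            # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                          # at most one node without an available pod
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the test patches this to agnhost:2.13

Patching spec.template.spec.containers[0].image is all the test does; the DaemonSet controller performs the node-by-node replacement recorded in the log.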
• [SLOW TEST:29.626 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":282,"skipped":4726,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:16:45.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 15 01:16:45.534: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6" in namespace "security-context-test-8604" to be "Succeeded or Failed" May 15 01:16:45.537: INFO: Pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617031ms May 15 01:16:47.592: INFO: Pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058060874s May 15 01:16:49.596: INFO: Pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062543392s May 15 01:16:49.596: INFO: Pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6" satisfied condition "Succeeded or Failed" May 15 01:16:49.618: INFO: Got logs for pod "busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:16:49.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8604" for this suite. 
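The captured container log — "ip: RTNETLINK answers: Operation not permitted" — is the expected evidence here: without privileged: true the container lacks CAP_NET_ADMIN, so a netlink operation such as adding a link is refused. A pod along these lines reproduces it (the exact command is illustrative; the trailing "|| true" keeps the exit code zero so the pod can reach phase Succeeded, as in the log):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false                # the property under test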
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4744,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:16:49.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:16:49.705: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:16:49.728: INFO: Waiting for terminating namespaces to be deleted... May 15 01:16:49.731: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 15 01:16:49.735: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 15 01:16:49.735: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 15 01:16:49.735: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 15 01:16:49.735: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 15 01:16:49.735: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 01:16:49.735: INFO: Container kindnet-cni ready: true, restart count 0 May 15 01:16:49.735: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 15 01:16:49.735: INFO: Container kube-proxy ready: true, restart count 0 May 15 01:16:49.735: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 15 01:16:49.739: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 15 01:16:49.739: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 15 01:16:49.739: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 15 01:16:49.739: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 15 01:16:49.739: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 01:16:49.739: INFO: Container kindnet-cni ready: true, restart count 0 May 15 01:16:49.739: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 15 01:16:49.739: INFO: Container kube-proxy ready: true, restart count 0 May 15 01:16:49.739: INFO: busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6 from security-context-test-8604 started at 2020-05-15 01:16:45 +0000 UTC (1 container statuses recorded) May 15 01:16:49.739: INFO: Container busybox-privileged-false-7ca4b63b-03eb-447c-9a98-29438f41c9b6 ready: false, restart 
count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 15 01:16:49.876: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 15 01:16:49.876: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 15 01:16:49.876: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 15 01:16:49.876: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 15 01:16:49.876: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 15 01:16:49.876: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 15 01:16:49.876: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 15 01:16:49.882: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0.160f0ebfae17635b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-143/filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0.160f0ec034559411], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0.160f0ec080aded82], Reason = [Created], Message = [Created container filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0] STEP: Considering event: Type = [Normal], Name = [filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0.160f0ec08fb7e14b], Reason = [Started], Message = [Started container filler-pod-9096467c-0d3d-4cdd-b844-812d2424f3b0] STEP: Considering event: Type = [Normal], Name = [filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166.160f0ebfaed86eaa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-143/filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166.160f0ebffdae7174], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166.160f0ec05d8e8442], Reason = [Created], Message = [Created container filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166] STEP: Considering event: Type = [Normal], Name = [filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166.160f0ec07a6b36b6], Reason = [Started], Message = [Started container filler-pod-9dad15e6-c5e3-4577-9fa8-c292c71eb166] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f0ec1181b38aa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f0ec11c62ba20], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
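With the filler pods holding cpu=11130m per node, any further nonzero CPU request cannot fit, and the FailedScheduling events above report exactly that: the control-plane node is excluded by its taint and both workers are short on CPU. A sketch of the kind of pod that triggers it (name and request size are illustrative; all that matters is requesting more CPU than the filler pods left free):

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 500m          # exceeds the remaining allocatable CPU on every schedulable node
      limits:
        cpu: 500m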
STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:16:57.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-143" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.562 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":284,"skipped":4746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:16:57.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2718 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 01:16:57.611: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 15 01:16:57.734: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 01:16:59.807: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 15 01:17:01.741: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:03.738: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:05.739: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:07.737: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:09.747: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:11.738: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:13.738: INFO: The status of Pod netserver-0 is Running (Ready = false) May 15 01:17:15.738: INFO: The status of Pod netserver-0 is Running (Ready = true) May 15 01:17:15.744: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 15 01:17:21.796: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2718 PodName:host-test-container-pod 
ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 15 01:17:21.796: INFO: >>> kubeConfig: /root/.kube/config
I0515 01:17:21.824328 7 log.go:172] (0xc002638000) (0xc001172960) Create stream
I0515 01:17:21.824380 7 log.go:172] (0xc002638000) (0xc001172960) Stream added, broadcasting: 1
I0515 01:17:21.825863 7 log.go:172] (0xc002638000) Reply frame received for 1
I0515 01:17:21.825897 7 log.go:172] (0xc002638000) (0xc000dbe0a0) Create stream
I0515 01:17:21.825911 7 log.go:172] (0xc002638000) (0xc000dbe0a0) Stream added, broadcasting: 3
I0515 01:17:21.826647 7 log.go:172] (0xc002638000) Reply frame received for 3
I0515 01:17:21.826668 7 log.go:172] (0xc002638000) (0xc001172be0) Create stream
I0515 01:17:21.826676 7 log.go:172] (0xc002638000) (0xc001172be0) Stream added, broadcasting: 5
I0515 01:17:21.827271 7 log.go:172] (0xc002638000) Reply frame received for 5
I0515 01:17:22.993776 7 log.go:172] (0xc002638000) Data frame received for 3
I0515 01:17:22.993821 7 log.go:172] (0xc000dbe0a0) (3) Data frame handling
I0515 01:17:22.993859 7 log.go:172] (0xc000dbe0a0) (3) Data frame sent
I0515 01:17:22.994172 7 log.go:172] (0xc002638000) Data frame received for 3
I0515 01:17:22.994200 7 log.go:172] (0xc000dbe0a0) (3) Data frame handling
I0515 01:17:22.994235 7 log.go:172] (0xc002638000) Data frame received for 5
I0515 01:17:22.994264 7 log.go:172] (0xc001172be0) (5) Data frame handling
I0515 01:17:22.996233 7 log.go:172] (0xc002638000) Data frame received for 1
I0515 01:17:22.996275 7 log.go:172] (0xc001172960) (1) Data frame handling
I0515 01:17:22.996300 7 log.go:172] (0xc001172960) (1) Data frame sent
I0515 01:17:22.996319 7 log.go:172] (0xc002638000) (0xc001172960) Stream removed, broadcasting: 1
I0515 01:17:22.996478 7 log.go:172] (0xc002638000) (0xc001172960) Stream removed, broadcasting: 1
I0515 01:17:22.996508 7 log.go:172] (0xc002638000) (0xc000dbe0a0) Stream removed, broadcasting: 3
I0515 01:17:22.996532 7 log.go:172] (0xc002638000) Go away received
I0515 01:17:22.996572 7 log.go:172] (0xc002638000) (0xc001172be0) Stream removed, broadcasting: 5
May 15 01:17:22.996: INFO: Found all expected endpoints: [netserver-0]
May 15 01:17:23.023: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.53 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2718 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 15 01:17:23.023: INFO: >>> kubeConfig: /root/.kube/config
I0515 01:17:23.062850 7 log.go:172] (0xc00282bef0) (0xc001173720) Create stream
I0515 01:17:23.062900 7 log.go:172] (0xc00282bef0) (0xc001173720) Stream added, broadcasting: 1
I0515 01:17:23.064915 7 log.go:172] (0xc00282bef0) Reply frame received for 1
I0515 01:17:23.064977 7 log.go:172] (0xc00282bef0) (0xc00103a000) Create stream
I0515 01:17:23.064998 7 log.go:172] (0xc00282bef0) (0xc00103a000) Stream added, broadcasting: 3
I0515 01:17:23.066152 7 log.go:172] (0xc00282bef0) Reply frame received for 3
I0515 01:17:23.066192 7 log.go:172] (0xc00282bef0) (0xc000a440a0) Create stream
I0515 01:17:23.066213 7 log.go:172] (0xc00282bef0) (0xc000a440a0) Stream added, broadcasting: 5
I0515 01:17:23.067129 7 log.go:172] (0xc00282bef0) Reply frame received for 5
I0515 01:17:24.146735 7 log.go:172] (0xc00282bef0) Data frame received for 3
I0515 01:17:24.146780 7 log.go:172] (0xc00103a000) (3) Data frame handling
I0515 01:17:24.146819 7 log.go:172] (0xc00103a000) (3) Data frame sent
I0515 01:17:24.147108 7 log.go:172] (0xc00282bef0) Data frame received for 5
I0515 01:17:24.147125 7 log.go:172] (0xc000a440a0) (5) Data frame handling
I0515 01:17:24.147421 7 log.go:172] (0xc00282bef0) Data frame received for 3
I0515 01:17:24.147445 7 log.go:172] (0xc00103a000) (3) Data frame handling
I0515 01:17:24.150411 7 log.go:172] (0xc00282bef0) Data frame received for 1
I0515 01:17:24.150494 7 log.go:172] (0xc001173720) (1) Data frame handling
I0515 01:17:24.150525 7 log.go:172] (0xc001173720) (1) Data frame sent
I0515 01:17:24.150541 7 log.go:172] (0xc00282bef0) (0xc001173720) Stream removed, broadcasting: 1
I0515 01:17:24.150568 7 log.go:172] (0xc00282bef0) Go away received
I0515 01:17:24.150708 7 log.go:172] (0xc00282bef0) (0xc001173720) Stream removed, broadcasting: 1
I0515 01:17:24.150731 7 log.go:172] (0xc00282bef0) (0xc00103a000) Stream removed, broadcasting: 3
I0515 01:17:24.150741 7 log.go:172] (0xc00282bef0) (0xc000a440a0) Stream removed, broadcasting: 5
May 15 01:17:24.150: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 01:17:24.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2718" for this suite.
• [SLOW TEST:26.970 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":285,"skipped":4781,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 01:17:24.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 01:17:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7037" for this suite.
• [SLOW TEST:7.152 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":286,"skipped":4783,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 01:17:31.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
May 15 01:17:31.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 01:17:48.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7623" for this suite.
• [SLOW TEST:17.458 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":287,"skipped":4784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 01:17:48.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 15 01:17:49.427: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 15 01:17:51.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 01:17:53.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725102269, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 15 01:17:56.514: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 15 01:18:00.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-215 to-be-attached-pod -i -c=container1'
May 15 01:18:00.682: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 01:18:00.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-215" for this suite.
STEP: Destroying namespace "webhook-215-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.144 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":288,"skipped":4807,"failed":0}
May 15 01:18:00.913: INFO: Running AfterSuite actions on all nodes
May 15 01:18:00.913: INFO: Running AfterSuite actions on node 1
May 15 01:18:00.913: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5910.652 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS