I0510 23:49:36.677815 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0510 23:49:36.678036 7 e2e.go:129] Starting e2e run "c8dce0b2-f676-4ad6-a374-f2233401cc47" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589154575 - Will randomize all specs
Will run 288 of 5095 specs

May 10 23:49:36.743: INFO: >>> kubeConfig: /root/.kube/config
May 10 23:49:36.749: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 10 23:49:36.774: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 10 23:49:36.811: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 10 23:49:36.811: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 10 23:49:36.811: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 10 23:49:36.819: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 10 23:49:36.819: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 10 23:49:36.819: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 10 23:49:36.820: INFO: kube-apiserver version: v1.18.2
May 10 23:49:36.820: INFO: >>> kubeConfig: /root/.kube/config
May 10 23:49:36.827: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:49:36.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 10 23:49:36.940: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-5410bded-5832-4ed5-802d-088156120ccb
STEP: Creating secret with name secret-projected-all-test-volume-0f571988-ffce-4efb-89b7-12c9c2b95639
STEP: Creating a pod to test Check all projections for projected volume plugin
May 10 23:49:36.979: INFO: Waiting up to 5m0s for pod "projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f" in namespace "projected-6518" to be "Succeeded or Failed"
May 10 23:49:36.982: INFO: Pod "projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.908434ms
May 10 23:49:38.987: INFO: Pod "projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007971669s
May 10 23:49:40.991: INFO: Pod "projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011945593s
STEP: Saw pod success
May 10 23:49:40.991: INFO: Pod "projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f" satisfied condition "Succeeded or Failed"
May 10 23:49:40.994: INFO: Trying to get logs from node latest-worker pod projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f container projected-all-volume-test: 
STEP: delete the pod
May 10 23:49:41.043: INFO: Waiting for pod projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f to disappear
May 10 23:49:41.055: INFO: Pod projected-volume-156d4f7c-8071-4028-b3fe-51f46380a78f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:49:41.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6518" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":1,"skipped":32,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:49:41.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 10 23:49:44.255: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:49:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5136" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:49:44.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-c9e2034b-1db9-4a08-9523-f14120f72933 STEP: Creating configMap with name cm-test-opt-upd-94c7b799-957b-481a-8849-244c3069cd57 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c9e2034b-1db9-4a08-9523-f14120f72933 STEP: Updating configmap cm-test-opt-upd-94c7b799-957b-481a-8849-244c3069cd57 STEP: Creating configMap with name cm-test-opt-create-3d3e2b11-2511-48d4-b0ba-f261be46f364 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:49:54.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4805" for this suite. 
• [SLOW TEST:10.436 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":88,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:49:54.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5361
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5361
I0510 23:49:55.080242 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5361, replica count: 2
I0510 23:49:58.130699 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0510 23:50:01.130974 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 10 23:50:01.131: INFO: Creating new exec pod
May 10 23:50:06.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5361 execpodwj4j8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 10 23:50:09.058: INFO: stderr: "I0510 23:50:08.951132 28 log.go:172] (0xc000bca000) (0xc0008426e0) Create stream\nI0510 23:50:08.951200 28 log.go:172] (0xc000bca000) (0xc0008426e0) Stream added, broadcasting: 1\nI0510 23:50:08.953619 28 log.go:172] (0xc000bca000) Reply frame received for 1\nI0510 23:50:08.953651 28 log.go:172] (0xc000bca000) (0xc000843040) Create stream\nI0510 23:50:08.953659 28 log.go:172] (0xc000bca000) (0xc000843040) Stream added, broadcasting: 3\nI0510 23:50:08.954853 28 log.go:172] (0xc000bca000) Reply frame received for 3\nI0510 23:50:08.954889 28 log.go:172] (0xc000bca000) (0xc000838b40) Create stream\nI0510 23:50:08.954901 28 log.go:172] (0xc000bca000) (0xc000838b40) Stream added, broadcasting: 5\nI0510 23:50:08.955893 28 log.go:172] (0xc000bca000) Reply frame received for 5\nI0510 23:50:09.048362 28 log.go:172] (0xc000bca000) Data frame received for 5\nI0510 23:50:09.048397 28 log.go:172] (0xc000838b40) (5) Data frame handling\nI0510 23:50:09.048418 28 log.go:172] (0xc000838b40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0510 23:50:09.049600 28 log.go:172] (0xc000bca000) Data frame received for 5\nI0510 23:50:09.049638 28 log.go:172] (0xc000838b40) (5) Data frame handling\nI0510 23:50:09.049676 28 log.go:172] (0xc000838b40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0510 23:50:09.050080 28 log.go:172] (0xc000bca000) Data frame received for 3\nI0510 23:50:09.050100 28 log.go:172] (0xc000843040) (3) Data frame handling\nI0510 23:50:09.050156 28 log.go:172] (0xc000bca000) Data frame received for 5\nI0510 23:50:09.050189 28 log.go:172] (0xc000838b40) (5) Data frame handling\nI0510 23:50:09.051921 28 log.go:172] (0xc000bca000) Data frame received for 1\nI0510 23:50:09.051953 28 log.go:172] (0xc0008426e0) (1) Data frame handling\nI0510 23:50:09.051974 28 log.go:172] (0xc0008426e0) (1) Data frame sent\nI0510 23:50:09.052001 28 log.go:172] (0xc000bca000) (0xc0008426e0) Stream removed, broadcasting: 1\nI0510 23:50:09.052032 28 log.go:172] (0xc000bca000) Go away received\nI0510 23:50:09.052534 28 log.go:172] (0xc000bca000) (0xc0008426e0) Stream removed, broadcasting: 1\nI0510 23:50:09.052558 28 log.go:172] (0xc000bca000) (0xc000843040) Stream removed, broadcasting: 3\nI0510 23:50:09.052570 28 log.go:172] (0xc000bca000) (0xc000838b40) Stream removed, broadcasting: 5\n"
May 10 23:50:09.058: INFO: stdout: ""
May 10 23:50:09.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5361 execpodwj4j8 -- /bin/sh -x -c nc -zv -t -w 2 10.107.41.225 80'
May 10 23:50:09.275: INFO: stderr: "I0510 23:50:09.197616 54 log.go:172] (0xc0009e9340) (0xc0009c85a0) Create stream\nI0510 23:50:09.197673 54 log.go:172] (0xc0009e9340) (0xc0009c85a0) Stream added, broadcasting: 1\nI0510 23:50:09.202319 54 log.go:172] (0xc0009e9340) Reply frame received for 1\nI0510 23:50:09.202387 54 log.go:172] (0xc0009e9340) (0xc000722d20) Create stream\nI0510 23:50:09.202404 54 log.go:172] (0xc0009e9340) (0xc000722d20) Stream added, broadcasting: 3\nI0510 23:50:09.203304 54 log.go:172] (0xc0009e9340) Reply frame received for 3\nI0510 23:50:09.203325 54 log.go:172] (0xc0009e9340) (0xc00058a280) Create stream\nI0510 23:50:09.203333 54 log.go:172] (0xc0009e9340) (0xc00058a280) Stream added, broadcasting: 5\nI0510 23:50:09.204139 54 log.go:172] (0xc0009e9340) Reply frame received for 5\nI0510 23:50:09.268592 54 log.go:172] (0xc0009e9340) Data frame received for 3\nI0510 23:50:09.268616 54 log.go:172] (0xc000722d20) (3) Data frame handling\nI0510 23:50:09.268630 54 log.go:172] (0xc0009e9340) Data frame received for 5\nI0510 23:50:09.268636 54 log.go:172] (0xc00058a280) (5) Data frame handling\nI0510 23:50:09.268648 54 log.go:172] (0xc00058a280) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.41.225 80\nConnection to 10.107.41.225 80 port [tcp/http] succeeded!\nI0510 23:50:09.268662 54 log.go:172] (0xc0009e9340) Data frame received for 5\nI0510 23:50:09.268712 54 log.go:172] (0xc00058a280) (5) Data frame handling\nI0510 23:50:09.270285 54 log.go:172] (0xc0009e9340) Data frame received for 1\nI0510 23:50:09.270301 54 log.go:172] (0xc0009c85a0) (1) Data frame handling\nI0510 23:50:09.270318 54 log.go:172] (0xc0009c85a0) (1) Data frame sent\nI0510 23:50:09.270331 54 log.go:172] (0xc0009e9340) (0xc0009c85a0) Stream removed, broadcasting: 1\nI0510 23:50:09.270515 54 log.go:172] (0xc0009e9340) Go away received\nI0510 23:50:09.270607 54 log.go:172] (0xc0009e9340) (0xc0009c85a0) Stream removed, broadcasting: 1\nI0510 23:50:09.270625 54 log.go:172] (0xc0009e9340) (0xc000722d20) Stream removed, broadcasting: 3\nI0510 23:50:09.270633 54 log.go:172] (0xc0009e9340) (0xc00058a280) Stream removed, broadcasting: 5\n"
May 10 23:50:09.275: INFO: stdout: ""
May 10 23:50:09.275: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:09.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5361" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:14.502 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":4,"skipped":99,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:09.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:09.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9834" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":5,"skipped":121,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:09.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-8a661862-f33b-4570-97b1-65c16a13e23c
STEP: Creating a pod to test consume configMaps
May 10 23:50:09.597: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a" in namespace "projected-4548" to be "Succeeded or Failed"
May 10 23:50:09.601: INFO: Pod "pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172451ms
May 10 23:50:11.606: INFO: Pod "pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009065061s
May 10 23:50:13.780: INFO: Pod "pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183044853s
STEP: Saw pod success
May 10 23:50:13.780: INFO: Pod "pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a" satisfied condition "Succeeded or Failed"
May 10 23:50:13.790: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a container projected-configmap-volume-test: 
STEP: delete the pod
May 10 23:50:13.808: INFO: Waiting for pod pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a to disappear
May 10 23:50:13.812: INFO: Pod pod-projected-configmaps-6eab1897-a73a-4d17-991c-d8a0b902e95a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:13.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4548" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":134,"failed":0} SSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:50:13.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:50:13.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7356" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":7,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:50:13.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-8520 STEP: creating replication controller nodeport-test in namespace services-8520 I0510 23:50:14.135773 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8520, replica count: 2 I0510 23:50:17.186151 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 23:50:20.186392 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 23:50:20.186: INFO: Creating new exec pod May 10 23:50:25.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8520 execpodjp74w -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 10 23:50:25.461: INFO: stderr: "I0510 23:50:25.384332 
75 log.go:172] (0xc000510d10) (0xc00041cd20) Create stream\nI0510 23:50:25.384385 75 log.go:172] (0xc000510d10) (0xc00041cd20) Stream added, broadcasting: 1\nI0510 23:50:25.386936 75 log.go:172] (0xc000510d10) Reply frame received for 1\nI0510 23:50:25.386982 75 log.go:172] (0xc000510d10) (0xc000764640) Create stream\nI0510 23:50:25.386992 75 log.go:172] (0xc000510d10) (0xc000764640) Stream added, broadcasting: 3\nI0510 23:50:25.387757 75 log.go:172] (0xc000510d10) Reply frame received for 3\nI0510 23:50:25.387781 75 log.go:172] (0xc000510d10) (0xc000764be0) Create stream\nI0510 23:50:25.387789 75 log.go:172] (0xc000510d10) (0xc000764be0) Stream added, broadcasting: 5\nI0510 23:50:25.388530 75 log.go:172] (0xc000510d10) Reply frame received for 5\nI0510 23:50:25.450853 75 log.go:172] (0xc000510d10) Data frame received for 5\nI0510 23:50:25.450900 75 log.go:172] (0xc000764be0) (5) Data frame handling\nI0510 23:50:25.450933 75 log.go:172] (0xc000764be0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0510 23:50:25.451053 75 log.go:172] (0xc000510d10) Data frame received for 5\nI0510 23:50:25.451082 75 log.go:172] (0xc000764be0) (5) Data frame handling\nI0510 23:50:25.451111 75 log.go:172] (0xc000764be0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0510 23:50:25.451994 75 log.go:172] (0xc000510d10) Data frame received for 5\nI0510 23:50:25.452021 75 log.go:172] (0xc000764be0) (5) Data frame handling\nI0510 23:50:25.452054 75 log.go:172] (0xc000510d10) Data frame received for 3\nI0510 23:50:25.452099 75 log.go:172] (0xc000764640) (3) Data frame handling\nI0510 23:50:25.455338 75 log.go:172] (0xc000510d10) Data frame received for 1\nI0510 23:50:25.455787 75 log.go:172] (0xc00041cd20) (1) Data frame handling\nI0510 23:50:25.455852 75 log.go:172] (0xc00041cd20) (1) Data frame sent\nI0510 23:50:25.455954 75 log.go:172] (0xc000510d10) (0xc00041cd20) Stream removed, broadcasting: 1\nI0510 23:50:25.456025 75 log.go:172] (0xc000510d10) Go away received\nI0510 23:50:25.456722 75 log.go:172] (0xc000510d10) (0xc00041cd20) Stream removed, broadcasting: 1\nI0510 23:50:25.456771 75 log.go:172] (0xc000510d10) (0xc000764640) Stream removed, broadcasting: 3\nI0510 23:50:25.456798 75 log.go:172] (0xc000510d10) (0xc000764be0) Stream removed, broadcasting: 5\n" May 10 23:50:25.461: INFO: stdout: "" May 10 23:50:25.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8520 execpodjp74w -- /bin/sh -x -c nc -zv -t -w 2 10.101.169.49 80' May 10 23:50:25.691: INFO: stderr: "I0510 23:50:25.591977 95 log.go:172] (0xc000946000) (0xc0004c8c80) Create stream\nI0510 23:50:25.592060 95 log.go:172] (0xc000946000) (0xc0004c8c80) Stream added, broadcasting: 1\nI0510 23:50:25.595661 95 log.go:172] (0xc000946000) Reply frame received for 1\nI0510 23:50:25.595717 95 log.go:172] (0xc000946000) (0xc0000ddea0) Create stream\nI0510 23:50:25.595734 95 log.go:172] (0xc000946000) (0xc0000ddea0) Stream added, broadcasting: 3\nI0510 23:50:25.596864 95 log.go:172] (0xc000946000) Reply frame received for 3\nI0510 23:50:25.596918 95 log.go:172] (0xc000946000) (0xc000520140) Create stream\nI0510 23:50:25.596944 95 log.go:172] (0xc000946000) (0xc000520140) Stream added, broadcasting: 5\nI0510 23:50:25.598231 95 log.go:172] (0xc000946000) Reply frame received for 5\nI0510 23:50:25.683273 95 log.go:172] (0xc000946000) Data frame received for 3\nI0510 23:50:25.683297 95 log.go:172] (0xc0000ddea0) (3) Data frame 
handling\nI0510 23:50:25.683322 95 log.go:172] (0xc000946000) Data frame received for 5\nI0510 23:50:25.683355 95 log.go:172] (0xc000520140) (5) Data frame handling\nI0510 23:50:25.683390 95 log.go:172] (0xc000520140) (5) Data frame sent\nI0510 23:50:25.683409 95 log.go:172] (0xc000946000) Data frame received for 5\nI0510 23:50:25.683425 95 log.go:172] (0xc000520140) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.169.49 80\nConnection to 10.101.169.49 80 port [tcp/http] succeeded!\nI0510 23:50:25.685682 95 log.go:172] (0xc000946000) Data frame received for 1\nI0510 23:50:25.685704 95 log.go:172] (0xc0004c8c80) (1) Data frame handling\nI0510 23:50:25.685720 95 log.go:172] (0xc0004c8c80) (1) Data frame sent\nI0510 23:50:25.685732 95 log.go:172] (0xc000946000) (0xc0004c8c80) Stream removed, broadcasting: 1\nI0510 23:50:25.685752 95 log.go:172] (0xc000946000) Go away received\nI0510 23:50:25.686196 95 log.go:172] (0xc000946000) (0xc0004c8c80) Stream removed, broadcasting: 1\nI0510 23:50:25.686243 95 log.go:172] (0xc000946000) (0xc0000ddea0) Stream removed, broadcasting: 3\nI0510 23:50:25.686266 95 log.go:172] (0xc000946000) (0xc000520140) Stream removed, broadcasting: 5\n" May 10 23:50:25.691: INFO: stdout: "" May 10 23:50:25.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8520 execpodjp74w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32496' May 10 23:50:25.896: INFO: stderr: "I0510 23:50:25.820945 116 log.go:172] (0xc0009df970) (0xc000ba6280) Create stream\nI0510 23:50:25.821017 116 log.go:172] (0xc0009df970) (0xc000ba6280) Stream added, broadcasting: 1\nI0510 23:50:25.826460 116 log.go:172] (0xc0009df970) Reply frame received for 1\nI0510 23:50:25.826507 116 log.go:172] (0xc0009df970) (0xc0008346e0) Create stream\nI0510 23:50:25.826519 116 log.go:172] (0xc0009df970) (0xc0008346e0) Stream added, broadcasting: 3\nI0510 23:50:25.827523 116 log.go:172] (0xc0009df970) Reply frame received for 3\nI0510 23:50:25.827603 116 log.go:172] (0xc0009df970) (0xc000548e60) Create stream\nI0510 23:50:25.827628 116 log.go:172] (0xc0009df970) (0xc000548e60) Stream added, broadcasting: 5\nI0510 23:50:25.828886 116 log.go:172] (0xc0009df970) Reply frame received for 5\nI0510 23:50:25.888372 116 log.go:172] (0xc0009df970) Data frame received for 5\nI0510 23:50:25.888410 116 log.go:172] (0xc000548e60) (5) Data frame handling\nI0510 23:50:25.888432 116 log.go:172] (0xc000548e60) (5) Data frame sent\nI0510 23:50:25.888456 116 log.go:172] (0xc0009df970) Data frame received for 5\nI0510 23:50:25.888467 116 log.go:172] (0xc000548e60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32496\nConnection to 172.17.0.13 32496 port [tcp/32496] succeeded!\nI0510 23:50:25.888504 116 log.go:172] (0xc0009df970) Data frame received for 3\nI0510 23:50:25.888515 116 log.go:172] (0xc0008346e0) (3) Data frame handling\nI0510 23:50:25.889951 116 log.go:172] (0xc0009df970) Data frame received for 1\nI0510 23:50:25.889983 116 log.go:172] (0xc000ba6280) (1) Data frame handling\nI0510 23:50:25.890004 116 log.go:172] (0xc000ba6280) (1) Data frame sent\nI0510 23:50:25.890024 116 log.go:172] (0xc0009df970) (0xc000ba6280) Stream removed, broadcasting: 1\nI0510 23:50:25.890429 116 log.go:172] (0xc0009df970) (0xc000ba6280) Stream removed, broadcasting: 1\nI0510 23:50:25.890455 116 log.go:172] (0xc0009df970) (0xc0008346e0) Stream removed, broadcasting: 3\nI0510 23:50:25.890666 116 log.go:172] (0xc0009df970) (0xc000548e60) Stream removed, broadcasting: 
5\n" May 10 23:50:25.896: INFO: stdout: "" May 10 23:50:25.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8520 execpodjp74w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32496' May 10 23:50:26.105: INFO: stderr: "I0510 23:50:26.026590 138 log.go:172] (0xc0006220b0) (0xc0005e41e0) Create stream\nI0510 23:50:26.026654 138 log.go:172] (0xc0006220b0) (0xc0005e41e0) Stream added, broadcasting: 1\nI0510 23:50:26.034389 138 log.go:172] (0xc0006220b0) Reply frame received for 1\nI0510 23:50:26.034470 138 log.go:172] (0xc0006220b0) (0xc000256140) Create stream\nI0510 23:50:26.034498 138 log.go:172] (0xc0006220b0) (0xc000256140) Stream added, broadcasting: 3\nI0510 23:50:26.035260 138 log.go:172] (0xc0006220b0) Reply frame received for 3\nI0510 23:50:26.035282 138 log.go:172] (0xc0006220b0) (0xc0005e4780) Create stream\nI0510 23:50:26.035290 138 log.go:172] (0xc0006220b0) (0xc0005e4780) Stream added, broadcasting: 5\nI0510 23:50:26.035982 138 log.go:172] (0xc0006220b0) Reply frame received for 5\nI0510 23:50:26.098158 138 log.go:172] (0xc0006220b0) Data frame received for 3\nI0510 23:50:26.098296 138 log.go:172] (0xc000256140) (3) Data frame handling\nI0510 23:50:26.098352 138 log.go:172] (0xc0006220b0) Data frame received for 5\nI0510 23:50:26.098378 138 log.go:172] (0xc0005e4780) (5) Data frame handling\nI0510 23:50:26.098398 138 log.go:172] (0xc0005e4780) (5) Data frame sent\nI0510 23:50:26.098415 138 log.go:172] (0xc0006220b0) Data frame received for 5\nI0510 23:50:26.098443 138 log.go:172] (0xc0005e4780) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32496\nConnection to 172.17.0.12 32496 port [tcp/32496] succeeded!\nI0510 23:50:26.099574 138 log.go:172] (0xc0006220b0) Data frame received for 1\nI0510 23:50:26.099612 138 log.go:172] (0xc0005e41e0) (1) Data frame handling\nI0510 23:50:26.099626 138 log.go:172] (0xc0005e41e0) (1) Data frame sent\nI0510 23:50:26.099639 138 log.go:172] (0xc0006220b0) (0xc0005e41e0) Stream removed, broadcasting: 1\nI0510 23:50:26.099653 138 log.go:172] (0xc0006220b0) Go away received\nI0510 23:50:26.100169 138 log.go:172] (0xc0006220b0) (0xc0005e41e0) Stream removed, broadcasting: 1\nI0510 23:50:26.100213 138 log.go:172] (0xc0006220b0) (0xc000256140) Stream removed, broadcasting: 3\nI0510 23:50:26.100239 138 log.go:172] (0xc0006220b0) (0xc0005e4780) Stream removed, broadcasting: 5\n" May 10 23:50:26.105: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:50:26.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8520" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:12.158 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":8,"skipped":159,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:26.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 23:50:26.898: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 23:50:29.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 23:50:31.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751426, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 23:50:34.061: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:34.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2006" for this suite.
STEP: Destroying namespace "webhook-2006-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.028 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":9,"skipped":179,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:35.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 10 23:50:40.112: INFO: Successfully updated pod "labelsupdate478a501b-a4ef-4e98-ac2c-04b70d97122d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:44.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7036" for this suite.
• [SLOW TEST:9.027 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:44.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-2906/configmap-test-d5e3d290-973a-4101-af76-bdbedaa44546
STEP: Creating a pod to test consume configMaps
May 10 23:50:44.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90" in namespace "configmap-2906" to be "Succeeded or Failed"
May 10 23:50:44.322: INFO: Pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51169ms
May 10 23:50:46.326: INFO: Pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010851691s
May 10 23:50:48.348: INFO: Pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03289971s
May 10 23:50:50.354: INFO: Pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038009536s
STEP: Saw pod success
May 10 23:50:50.354: INFO: Pod "pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90" satisfied condition "Succeeded or Failed"
May 10 23:50:50.357: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90 container env-test: 
STEP: delete the pod
May 10 23:50:50.446: INFO: Waiting for pod pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90 to disappear
May 10 23:50:50.460: INFO: Pod pod-configmaps-dd6bb86b-c908-4467-9bde-196dd4649f90 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:50.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2906" for this suite.
• [SLOW TEST:6.299 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 10 23:50:50.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 10 23:50:54.694: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 10 23:50:54.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3700" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:50:54.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 10 23:50:54.958: INFO: Waiting up to 5m0s for pod "pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d" in namespace "emptydir-4906" to be "Succeeded or Failed" May 10 23:50:54.961: INFO: Pod "pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536371ms May 10 23:50:56.965: INFO: Pod "pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007383613s May 10 23:50:58.970: INFO: Pod "pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012070752s STEP: Saw pod success May 10 23:50:58.970: INFO: Pod "pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d" satisfied condition "Succeeded or Failed" May 10 23:50:58.973: INFO: Trying to get logs from node latest-worker pod pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d container test-container: STEP: delete the pod May 10 23:50:59.009: INFO: Waiting for pod pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d to disappear May 10 23:50:59.017: INFO: Pod pod-d43c67e9-de54-4f30-abb7-22f3c0b1a75d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:50:59.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4906" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:50:59.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 10 23:50:59.126: INFO: PodSpec: initContainers in spec.initContainers May 10 23:51:45.667: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6e09f00f-b3f0-4674-9700-f15464e4ce3e", GenerateName:"", Namespace:"init-container-9350", SelfLink:"/api/v1/namespaces/init-container-9350/pods/pod-init-6e09f00f-b3f0-4674-9700-f15464e4ce3e", UID:"4dabd144-534e-4041-8f20-7c538f1a15df", ResourceVersion:"3204844", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724751459, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"126450080"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001cb9ae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cb9b20)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001cb9b60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cb9ba0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-srt54", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000ed27c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-srt54", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-srt54", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-srt54", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c69728), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00201d880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c69850)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c69870)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c69878), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c6987c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751459, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751459, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751459, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751459, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.82", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.82"}}, StartTime:(*v1.Time)(0xc001cb9be0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00201d960)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00201d9d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://64a735a4af233f49f4c4803bd70a7eb60286f0e741937e1447ff15eca0bc8ff0", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cb9c60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cb9c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002c6990f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:51:45.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9350" for this suite. • [SLOW TEST:46.729 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":14,"skipped":290,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:51:45.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 10 23:51:45.880: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 10 23:51:45.887: INFO: Number of nodes with available pods: 0 May 10 23:51:45.887: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 10 23:51:45.949: INFO: Number of nodes with available pods: 0 May 10 23:51:45.949: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:46.953: INFO: Number of nodes with available pods: 0 May 10 23:51:46.953: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:47.954: INFO: Number of nodes with available pods: 0 May 10 23:51:47.954: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:48.954: INFO: Number of nodes with available pods: 0 May 10 23:51:48.954: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:49.954: INFO: Number of nodes with available pods: 1 May 10 23:51:49.954: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 10 23:51:49.992: INFO: Number of nodes with available pods: 1 May 10 23:51:49.992: INFO: Number of running nodes: 0, number of available pods: 1 May 10 23:51:50.995: INFO: Number of nodes with available pods: 0 May 10 23:51:50.995: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 10 23:51:51.037: INFO: Number of nodes with available pods: 0 May 10 23:51:51.037: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:52.043: INFO: Number of nodes with available pods: 0 May 10 23:51:52.043: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:53.042: INFO: Number of nodes with available pods: 0 May 10 23:51:53.042: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:54.042: INFO: Number of nodes with available pods: 0 May 10 23:51:54.042: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:55.042: INFO: Number of nodes with available pods: 0 May 10 23:51:55.042: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:56.042: INFO: Number of nodes with available pods: 0 May 10 23:51:56.042: INFO: Node latest-worker is running more than one daemon pod May 10 23:51:57.042: INFO: Number of nodes with available pods: 1 May 10 23:51:57.042: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8149, will wait for the garbage collector to delete the pods May 10 23:51:57.108: INFO: Deleting DaemonSet.extensions daemon-set took: 6.458766ms May 10 23:51:57.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.461874ms May 10 23:52:04.911: INFO: Number of nodes with available pods: 0 May 10 23:52:04.911: INFO: Number of running nodes: 0, number of available pods: 0 May 10 23:52:04.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8149/daemonsets","resourceVersion":"3204964"},"items":null} May 10 23:52:04.936: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8149/pods","resourceVersion":"3204964"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:04.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8149" for this suite. 
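For reference, the flow this DaemonSet test just logged — create a DaemonSet gated on a node label, then relabel a node to make a pod appear — can be sketched with client-go. This is an illustrative sketch, not the suite's actual code; the names are assumed, the pause image is the one the suite uses elsewhere, and the context-taking signatures assume client-go v0.18 or newer.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSelectorDaemonSet creates a DaemonSet whose pods only schedule onto
// nodes labeled color=blue; until some node carries that label, no pods run.
func createSelectorDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods stay unscheduled everywhere until a node has this label.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.2",
					}},
				},
			},
		},
	}
	_, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}

Relabeling a node to color=green, as the log shows, then makes the controller evict the pod from the blue node and place one on the green node.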
• [SLOW TEST:19.242 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":15,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:04.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 10 23:52:05.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595" in namespace "projected-3322" to be "Succeeded or Failed" May 10 23:52:05.126: INFO: Pod "downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595": Phase="Pending", Reason="", readiness=false. Elapsed: 40.78906ms May 10 23:52:07.131: INFO: Pod "downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045511946s May 10 23:52:09.135: INFO: Pod "downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049793292s STEP: Saw pod success May 10 23:52:09.135: INFO: Pod "downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595" satisfied condition "Succeeded or Failed" May 10 23:52:09.138: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595 container client-container: STEP: delete the pod May 10 23:52:09.212: INFO: Waiting for pod downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595 to disappear May 10 23:52:09.229: INFO: Pod downwardapi-volume-cb2a4c00-550c-4ec7-aa25-aa8d9b350595 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:09.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3322" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":315,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:09.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 10 23:52:17.389: INFO: 10 pods remaining May 10 23:52:17.390: INFO: 10 pods has nil DeletionTimestamp May 10 23:52:17.390: INFO: May 10 23:52:19.439: INFO: 0 pods remaining May 10 23:52:19.439: INFO: 0 pods has nil DeletionTimestamp May 10 23:52:19.439: INFO: May 10 23:52:19.997: INFO: 0 pods remaining May 10 23:52:19.997: INFO: 0 pods has nil DeletionTimestamp May 10 23:52:19.997: INFO: STEP: Gathering metrics W0510 23:52:21.579712 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 10 23:52:21.579: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:21.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3276" for this suite. 
• [SLOW TEST:12.375 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":17,"skipped":326,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:21.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 10 23:52:22.603: INFO: Waiting up to 5m0s for pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831" in namespace "downward-api-4865" to be "Succeeded or Failed" May 10 23:52:22.606: INFO: Pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448575ms May 10 23:52:24.611: INFO: Pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091147s May 10 23:52:26.616: INFO: Pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013173234s May 10 23:52:28.620: INFO: Pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017062119s STEP: Saw pod success May 10 23:52:28.620: INFO: Pod "downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831" satisfied condition "Succeeded or Failed" May 10 23:52:28.623: INFO: Trying to get logs from node latest-worker2 pod downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831 container dapi-container: STEP: delete the pod May 10 23:52:28.663: INFO: Waiting for pod downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831 to disappear May 10 23:52:28.686: INFO: Pod downward-api-9ec61ac1-7cf4-46da-9685-ce2ca12c8831 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:28.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4865" for this suite. 
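The downward API pattern this test exercises — surfacing the node's host IP to the container as an environment variable — looks like the following in Go. Illustrative only (pod and container names are assumed; the busybox image is the one the suite uses elsewhere):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPPod builds a pod whose container sees the node's IP in $HOST_IP,
// resolved at admission time from the pod's status via a fieldRef.
func hostIPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}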
• [SLOW TEST:7.081 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":330,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:28.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:39.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5473" for this suite. • [SLOW TEST:11.220 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":19,"skipped":330,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:39.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-7872 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-7872 STEP: Deleting pre-stop pod May 10 23:52:53.127: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:52:53.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7872" for this suite. • [SLOW TEST:13.266 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":20,"skipped":330,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:52:53.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-lm99 STEP: Creating a pod to test atomic-volume-subpath May 10 23:52:53.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lm99" in namespace "subpath-6737" to be "Succeeded or Failed" May 10 23:52:53.297: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Pending", Reason="", readiness=false. Elapsed: 40.714326ms May 10 23:52:55.302: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045698103s May 10 23:52:57.314: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 4.057269783s May 10 23:52:59.317: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 6.060941671s May 10 23:53:01.336: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 8.079133331s May 10 23:53:03.340: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 10.083621467s May 10 23:53:05.345: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 12.088938485s May 10 23:53:07.350: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 14.093705344s May 10 23:53:09.354: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.097606236s May 10 23:53:11.358: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 18.102087574s May 10 23:53:13.363: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 20.106784103s May 10 23:53:15.367: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Running", Reason="", readiness=true. Elapsed: 22.110846068s May 10 23:53:17.371: INFO: Pod "pod-subpath-test-secret-lm99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.114408321s STEP: Saw pod success May 10 23:53:17.371: INFO: Pod "pod-subpath-test-secret-lm99" satisfied condition "Succeeded or Failed" May 10 23:53:17.374: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-lm99 container test-container-subpath-secret-lm99: STEP: delete the pod May 10 23:53:17.425: INFO: Waiting for pod pod-subpath-test-secret-lm99 to disappear May 10 23:53:17.432: INFO: Pod pod-subpath-test-secret-lm99 no longer exists STEP: Deleting pod pod-subpath-test-secret-lm99 May 10 23:53:17.432: INFO: Deleting pod "pod-subpath-test-secret-lm99" in namespace "subpath-6737" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:17.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6737" for this suite. • [SLOW TEST:24.261 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":21,"skipped":330,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:17.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:34.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7447" for this suite. 
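The quota lifecycle this test walks through — create a quota, watch status.used rise when a Secret is created and fall when it is deleted — starts from a ResourceQuota like the one sketched below. Illustrative client-go, assumed names and limit:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretQuota caps the number of Secret objects in the namespace;
// the quota controller then tracks creations and deletions in status.used.
func createSecretQuota(ctx context.Context, c kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-secrets"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceSecrets: resource.MustParse("5"),
			},
		},
	}
	_, err := c.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
	return err
}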
• [SLOW TEST:17.162 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":22,"skipped":344,"failed":0} [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:34.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 10 23:53:34.690: INFO: Waiting up to 5m0s for pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166" in namespace "emptydir-5627" to be "Succeeded or Failed" May 10 23:53:34.706: INFO: Pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166": Phase="Pending", Reason="", readiness=false. Elapsed: 15.805188ms May 10 23:53:36.710: INFO: Pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019753124s May 10 23:53:38.714: INFO: Pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166": Phase="Running", Reason="", readiness=true. Elapsed: 4.024110861s May 10 23:53:40.719: INFO: Pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028856851s STEP: Saw pod success May 10 23:53:40.719: INFO: Pod "pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166" satisfied condition "Succeeded or Failed" May 10 23:53:40.722: INFO: Trying to get logs from node latest-worker pod pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166 container test-container: STEP: delete the pod May 10 23:53:40.807: INFO: Waiting for pod pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166 to disappear May 10 23:53:40.810: INFO: Pod pod-9a137d10-8a5a-4100-b4e5-ca2e3c6e6166 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:40.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5627" for this suite. 
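For reference, the emptyDir shape this test checks — default (disk-backed) medium, with the mount's mode bits inspected from inside the container — can be sketched as below. Names are illustrative, not the suite's actual code:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir on the default medium and lists the mount
// point so the volume's mode can be read from the container's log.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				// Leaving Medium unset selects the default medium (node disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "ls -ld /cache"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
}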
• [SLOW TEST:6.216 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:40.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 23:53:41.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 23:53:43.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 23:53:45.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724751621, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 23:53:48.986: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:49.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1585" for this suite. STEP: Destroying namespace "webhook-1585-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.905 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":24,"skipped":445,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:49.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-389b2503-73a3-43cc-9376-3a56fb1b0b00 STEP: Creating a pod to test consume secrets May 10 23:53:49.811: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa" in namespace "projected-7264" to be "Succeeded or Failed" May 10 23:53:49.832: INFO: Pod "pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 21.486557ms May 10 23:53:51.836: INFO: Pod "pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025903252s May 10 23:53:53.842: INFO: Pod "pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031442309s STEP: Saw pod success May 10 23:53:53.842: INFO: Pod "pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa" satisfied condition "Succeeded or Failed" May 10 23:53:53.845: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa container secret-volume-test: STEP: delete the pod May 10 23:53:53.888: INFO: Waiting for pod pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa to disappear May 10 23:53:53.920: INFO: Pod pod-projected-secrets-851a333b-6a73-438f-9d60-fe3e4086d3aa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:53.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7264" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:53.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 10 23:53:54.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 10 23:53:54.257: INFO: stderr: "" May 10 23:53:54.257: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:54.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6374" for this suite. 
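The check behind `kubectl api-versions` — that the core "v1" group-version is served — can also be made programmatically through the discovery client. A minimal sketch, assuming a constructed clientset:

package main

import (
	"k8s.io/client-go/kubernetes"
)

// hasCoreV1 lists the API groups the server advertises and reports whether
// the legacy core group-version "v1" is among them.
func hasCoreV1(c kubernetes.Interface) (bool, error) {
	groups, err := c.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}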
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":26,"skipped":464,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:54.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 10 23:53:54.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3259' May 10 23:53:54.760: INFO: stderr: "" May 10 23:53:54.760: INFO: stdout: "pod/pause created\n" May 10 23:53:54.760: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 10 23:53:54.760: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3259" to be "running and ready" May 10 23:53:54.818: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 58.173005ms May 10 23:53:57.070: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309916808s May 10 23:53:59.075: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.315375197s May 10 23:53:59.075: INFO: Pod "pause" satisfied condition "running and ready" May 10 23:53:59.076: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 10 23:53:59.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3259' May 10 23:53:59.183: INFO: stderr: "" May 10 23:53:59.183: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 10 23:53:59.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3259' May 10 23:53:59.306: INFO: stderr: "" May 10 23:53:59.306: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 10 23:53:59.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3259' May 10 23:53:59.428: INFO: stderr: "" May 10 23:53:59.428: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 10 23:53:59.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3259' May 10 23:53:59.526: INFO: stderr: "" May 10 23:53:59.526: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 10 23:53:59.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3259' May 10 23:53:59.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 10 23:53:59.650: INFO: stdout: "pod \"pause\" force deleted\n" May 10 23:53:59.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3259' May 10 23:53:59.746: INFO: stderr: "No resources found in kubectl-3259 namespace.\n" May 10 23:53:59.746: INFO: stdout: "" May 10 23:53:59.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3259 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 10 23:53:59.837: INFO: stderr: "" May 10 23:53:59.837: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:53:59.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3259" for this suite. 
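The two kubectl invocations above — `kubectl label pods pause testing-label=...` and `kubectl label pods pause testing-label-` — reduce to patches on metadata.labels. A sketch of the equivalent API calls (illustrative, assumed names; context-taking client-go v0.18+ signatures):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setAndClearLabel adds testing-label to a pod, then removes it again by
// patching the label's value to null.
func setAndClearLabel(ctx context.Context, c kubernetes.Interface, ns, pod string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		return err
	}
	// A null value in a strategic merge patch deletes the key.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, del, metav1.PatchOptions{})
	return err
}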
• [SLOW TEST:5.567 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":27,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:53:59.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-7403/configmap-test-9c5133ae-600d-4375-bfad-42aa8b3b8b29 STEP: Creating a pod to test consume configMaps May 10 23:54:00.164: INFO: Waiting up to 5m0s for pod "pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e" in namespace "configmap-7403" to be "Succeeded or Failed" May 10 23:54:00.219: INFO: Pod "pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 54.977561ms May 10 23:54:02.222: INFO: Pod "pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058613714s May 10 23:54:04.227: INFO: Pod "pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063268909s STEP: Saw pod success May 10 23:54:04.227: INFO: Pod "pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e" satisfied condition "Succeeded or Failed" May 10 23:54:04.230: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e container env-test: STEP: delete the pod May 10 23:54:04.279: INFO: Waiting for pod pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e to disappear May 10 23:54:04.294: INFO: Pod pod-configmaps-c63ded2a-7ccf-480f-a719-f184cc541e4e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:04.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7403" for this suite. 
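The "consumable via the environment" pattern from this ConfigMap test — one ConfigMap key injected as an environment variable — is sketched below. Illustrative names; the ConfigMap and key are assumed:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapEnvPod maps the key "data-1" of ConfigMap "configmap-test" into
// the container environment as $DATA_1 and echoes it for verification.
func configMapEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo DATA_1=$DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}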
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":506,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:04.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412 May 10 23:54:04.427: INFO: Pod name my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412: Found 0 pods out of 1 May 10 23:54:09.441: INFO: Pod name my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412: Found 1 pods out of 1 May 10 23:54:09.442: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412" are running May 10 23:54:09.447: INFO: Pod "my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412-4hbxb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 23:54:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 23:54:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 23:54:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 23:54:04 +0000 UTC Reason: Message:}]) May 10 23:54:09.448: INFO: Trying to dial the pod May 10 23:54:14.460: INFO: Controller my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412: Got expected result from replica 1 [my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412-4hbxb]: "my-hostname-basic-c6e8dd72-acf3-4fa0-8beb-8dd102793412-4hbxb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:14.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8770" for this suite. 
• [SLOW TEST:10.166 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":29,"skipped":511,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:14.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-e7c9676d-19f3-44b4-b7c6-45c14fee272c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:14.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1096" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":30,"skipped":514,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:14.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:18.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3496" for this suite. 
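What "image defaults" means in the test above: when a container spec leaves Command and Args empty, the kubelet runs the image's own ENTRYPOINT/CMD. A minimal sketch (illustrative names; the pause image appears elsewhere in this run):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// imageDefaultsPod omits Command and Args, so the container executes
// whatever entrypoint the image itself defines.
func imageDefaultsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-defaults-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/pause:3.2", // no Command/Args: image defaults apply
			}},
		},
	}
}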
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:18.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 10 23:54:18.864: INFO: Waiting up to 5m0s for pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b" in namespace "emptydir-5429" to be "Succeeded or Failed" May 10 23:54:18.869: INFO: Pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.225344ms May 10 23:54:20.888: INFO: Pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023758189s May 10 23:54:22.892: INFO: Pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028086688s May 10 23:54:24.896: INFO: Pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031983907s STEP: Saw pod success May 10 23:54:24.896: INFO: Pod "pod-e7efe28d-4882-4130-8aae-9ffa8735960b" satisfied condition "Succeeded or Failed" May 10 23:54:24.902: INFO: Trying to get logs from node latest-worker2 pod pod-e7efe28d-4882-4130-8aae-9ffa8735960b container test-container: STEP: delete the pod May 10 23:54:24.918: INFO: Waiting for pod pod-e7efe28d-4882-4130-8aae-9ffa8735960b to disappear May 10 23:54:24.923: INFO: Pod pod-e7efe28d-4882-4130-8aae-9ffa8735960b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:24.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5429" for this suite. 
• [SLOW TEST:6.178 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":542,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:24.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 10 23:54:25.145: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 23:54:25.155: INFO: Waiting for terminating namespaces to be deleted... May 10 23:54:25.158: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 10 23:54:25.163: INFO: client-containers-7ab34f20-2294-4eaa-80c2-b9deedc6d6f5 from containers-3496 started at 2020-05-10 23:54:14 +0000 UTC (1 container status recorded) May 10 23:54:25.163: INFO: Container test-container ready: true, restart count 0 May 10 23:54:25.163: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 10 23:54:25.163: INFO: Container kindnet-cni ready: true, restart count 0 May 10 23:54:25.163: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 10 23:54:25.163: INFO: Container kube-proxy ready: true, restart count 0 May 10 23:54:25.163: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 10 23:54:25.168: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 10 23:54:25.168: INFO: Container kindnet-cni ready: true, restart count 0 May 10 23:54:25.168: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 10 23:54:25.168: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160dcfee2261272b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160dcfee26644dd0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:26.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3033" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":33,"skipped":548,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:26.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 10 23:54:26.324: INFO: Waiting up to 5m0s for pod "pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3" in namespace "emptydir-480" to be "Succeeded or Failed" May 10 23:54:26.328: INFO: Pod "pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.663428ms May 10 23:54:28.431: INFO: Pod "pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106412695s May 10 23:54:30.442: INFO: Pod "pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117960462s STEP: Saw pod success May 10 23:54:30.442: INFO: Pod "pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3" satisfied condition "Succeeded or Failed" May 10 23:54:30.444: INFO: Trying to get logs from node latest-worker pod pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3 container test-container: STEP: delete the pod May 10 23:54:30.485: INFO: Waiting for pod pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3 to disappear May 10 23:54:30.514: INFO: Pod pod-67d2934e-8564-40cb-976d-a1f4cd8bc0c3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:30.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-480" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":570,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:30.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 10 23:54:30.667: INFO: namespace kubectl-254 May 10 23:54:30.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-254' May 10 23:54:30.925: INFO: stderr: "" May 10 23:54:30.925: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 10 23:54:31.929: INFO: Selector matched 1 pods for map[app:agnhost] May 10 23:54:31.929: INFO: Found 0 / 1 May 10 23:54:33.153: INFO: Selector matched 1 pods for map[app:agnhost] May 10 23:54:33.153: INFO: Found 0 / 1 May 10 23:54:33.929: INFO: Selector matched 1 pods for map[app:agnhost] May 10 23:54:33.929: INFO: Found 0 / 1 May 10 23:54:34.929: INFO: Selector matched 1 pods for map[app:agnhost] May 10 23:54:34.929: INFO: Found 1 / 1 May 10 23:54:34.929: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 10 23:54:34.962: INFO: Selector matched 1 pods for map[app:agnhost] May 10 23:54:34.962: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 10 23:54:34.962: INFO: wait on agnhost-master startup in kubectl-254 May 10 23:54:34.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-8n8pn agnhost-master --namespace=kubectl-254' May 10 23:54:35.082: INFO: stderr: "" May 10 23:54:35.082: INFO: stdout: "Paused\n" STEP: exposing RC May 10 23:54:35.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-254' May 10 23:54:35.249: INFO: stderr: "" May 10 23:54:35.249: INFO: stdout: "service/rm2 exposed\n" May 10 23:54:35.303: INFO: Service rm2 in namespace kubectl-254 found. STEP: exposing service May 10 23:54:37.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-254' May 10 23:54:37.460: INFO: stderr: "" May 10 23:54:37.460: INFO: stdout: "service/rm3 exposed\n" May 10 23:54:37.492: INFO: Service rm3 in namespace kubectl-254 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-254" for this suite. • [SLOW TEST:8.909 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":35,"skipped":574,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:39.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 23:54:40.290: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 23:54:42.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 23:54:44.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751680, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 23:54:47.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:47.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3529" for this suite. STEP: Destroying namespace "webhook-3529-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.200 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":36,"skipped":585,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:47.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-ee9e0bd0-2e83-4211-b905-9fdc06498e07 STEP: Creating a pod to test consume secrets May 10 23:54:48.189: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c" in namespace "projected-2227" to be "Succeeded or Failed" May 10 23:54:48.196: INFO: Pod "pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2163ms May 10 23:54:50.200: INFO: Pod "pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010208416s May 10 23:54:52.204: INFO: Pod "pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014739569s STEP: Saw pod success May 10 23:54:52.204: INFO: Pod "pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c" satisfied condition "Succeeded or Failed" May 10 23:54:52.208: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c container projected-secret-volume-test: STEP: delete the pod May 10 23:54:52.377: INFO: Waiting for pod pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c to disappear May 10 23:54:52.402: INFO: Pod pod-projected-secrets-0fea5304-c7c4-40bb-b8a4-1415d693c74c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:52.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2227" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:52.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 10 23:54:52.549: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-5a56127d-c1f3-4a87-af76-9e51459905b1" in namespace "security-context-test-8665" to be "Succeeded or Failed" May 10 23:54:52.576: INFO: Pod "alpine-nnp-false-5a56127d-c1f3-4a87-af76-9e51459905b1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.193878ms May 10 23:54:54.579: INFO: Pod "alpine-nnp-false-5a56127d-c1f3-4a87-af76-9e51459905b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030610503s May 10 23:54:56.583: INFO: Pod "alpine-nnp-false-5a56127d-c1f3-4a87-af76-9e51459905b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034404682s May 10 23:54:56.583: INFO: Pod "alpine-nnp-false-5a56127d-c1f3-4a87-af76-9e51459905b1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:54:56.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8665" for this suite. 
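The pass condition for the alpine-nnp-false pod above is that a process in a container with allowPrivilegeEscalation: false cannot gain privileges (the field maps to the no_new_privs flag on the container process). In outline the pod looks like this sketch; the image, UID, and command are illustrative assumptions.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// noNewPrivsPod sketches a pod like alpine-nnp-false-...: a non-root
// container with AllowPrivilegeEscalation set to false. Image and command
// are illustrative, not the exact test payload.
func noNewPrivsPod() *corev1.Pod {
	runAsUser := int64(1000)
	allowEscalation := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "alpine-nnp-false-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-false",
				Image:   "alpine:3.11", // illustrative
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &runAsUser,
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
}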
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":636,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:54:56.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 10 23:54:56.723: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:54:56.744: INFO: Number of nodes with available pods: 0 May 10 23:54:56.744: INFO: Node latest-worker is running more than one daemon pod May 10 23:54:57.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:54:57.770: INFO: Number of nodes with available pods: 0 May 10 23:54:57.770: INFO: Node latest-worker is running more than one daemon pod May 10 23:54:58.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:54:58.805: INFO: Number of nodes with available pods: 0 May 10 23:54:58.805: INFO: Node latest-worker is running more than one daemon pod May 10 23:54:59.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:54:59.754: INFO: Number of nodes with available pods: 0 May 10 23:54:59.754: INFO: Node latest-worker is running more than one daemon pod May 10 23:55:00.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:00.754: INFO: Number of nodes with available pods: 0 May 10 23:55:00.754: INFO: Node latest-worker is running more than one daemon pod May 10 23:55:01.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:01.753: INFO: Number of nodes with available pods: 2 May 10 23:55:01.754: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 10 23:55:01.831: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:01.888: INFO: Number of nodes with available pods: 1 May 10 23:55:01.888: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:55:02.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:02.944: INFO: Number of nodes with available pods: 1 May 10 23:55:02.944: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:55:03.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:03.961: INFO: Number of nodes with available pods: 1 May 10 23:55:03.961: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:55:04.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:04.897: INFO: Number of nodes with available pods: 1 May 10 23:55:04.897: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:55:05.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:55:05.897: INFO: Number of nodes with available pods: 2 May 10 23:55:05.897: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4714, will wait for the garbage collector to delete the pods May 10 23:55:05.960: INFO: Deleting DaemonSet.extensions daemon-set took: 5.43194ms May 10 23:55:06.261: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.632403ms May 10 23:55:15.274: INFO: Number of nodes with available pods: 0 May 10 23:55:15.274: INFO: Number of running nodes: 0, number of available pods: 0 May 10 23:55:15.277: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4714/daemonsets","resourceVersion":"3206399"},"items":null} May 10 23:55:15.279: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4714/pods","resourceVersion":"3206399"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:55:15.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4714" for this suite. 
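The revival step above is the DaemonSet controller's reconcile loop at work: once a daemon pod is marked Failed, its node no longer counts as running a healthy replica, so the controller creates a replacement, while the tainted control-plane node is skipped throughout. A minimal DaemonSet of the shape this test exercises, as a hedged sketch (labels and image are illustrative):

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSimpleDaemonSet sketches the "daemon-set" object above: one pod per
// schedulable node; pods that die or are forced to Failed are recreated by
// the controller. Label and image choices are illustrative.
func createSimpleDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) (*appsv1.DaemonSet, error) {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.2", // illustrative
					}},
				},
			},
		},
	}
	return c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
}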
• [SLOW TEST:18.723 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":39,"skipped":652,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:55:15.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6312 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-6312 STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6312 May 10 23:55:15.410: INFO: Found 0 stateful pods, waiting for 1 May 10 23:55:25.415: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 10 23:55:25.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 23:55:25.674: INFO: stderr: "I0510 23:55:25.567541 434 log.go:172] (0xc000abf600) (0xc000b0e500) Create stream\nI0510 23:55:25.567590 434 log.go:172] (0xc000abf600) (0xc000b0e500) Stream added, broadcasting: 1\nI0510 23:55:25.571303 434 log.go:172] (0xc000abf600) Reply frame received for 1\nI0510 23:55:25.571339 434 log.go:172] (0xc000abf600) (0xc000740f00) Create stream\nI0510 23:55:25.571348 434 log.go:172] (0xc000abf600) (0xc000740f00) Stream added, broadcasting: 3\nI0510 23:55:25.572185 434 log.go:172] (0xc000abf600) Reply frame received for 3\nI0510 23:55:25.572239 434 log.go:172] (0xc000abf600) (0xc0006145a0) Create stream\nI0510 23:55:25.572253 434 log.go:172] (0xc000abf600) (0xc0006145a0) Stream added, broadcasting: 5\nI0510 23:55:25.573086 434 log.go:172] (0xc000abf600) Reply frame received for 5\nI0510 23:55:25.639992 434 log.go:172] (0xc000abf600) Data frame received for 5\nI0510 23:55:25.640020 434 log.go:172] (0xc0006145a0) (5) Data frame handling\nI0510 23:55:25.640040 434 log.go:172] (0xc0006145a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 23:55:25.666779 434 log.go:172] (0xc000abf600) Data frame received for
3\nI0510 23:55:25.666809 434 log.go:172] (0xc000740f00) (3) Data frame handling\nI0510 23:55:25.666831 434 log.go:172] (0xc000740f00) (3) Data frame sent\nI0510 23:55:25.666847 434 log.go:172] (0xc000abf600) Data frame received for 3\nI0510 23:55:25.666862 434 log.go:172] (0xc000740f00) (3) Data frame handling\nI0510 23:55:25.667216 434 log.go:172] (0xc000abf600) Data frame received for 5\nI0510 23:55:25.667245 434 log.go:172] (0xc0006145a0) (5) Data frame handling\nI0510 23:55:25.668700 434 log.go:172] (0xc000abf600) Data frame received for 1\nI0510 23:55:25.668728 434 log.go:172] (0xc000b0e500) (1) Data frame handling\nI0510 23:55:25.668753 434 log.go:172] (0xc000b0e500) (1) Data frame sent\nI0510 23:55:25.668798 434 log.go:172] (0xc000abf600) (0xc000b0e500) Stream removed, broadcasting: 1\nI0510 23:55:25.668833 434 log.go:172] (0xc000abf600) Go away received\nI0510 23:55:25.669512 434 log.go:172] (0xc000abf600) (0xc000b0e500) Stream removed, broadcasting: 1\nI0510 23:55:25.669536 434 log.go:172] (0xc000abf600) (0xc000740f00) Stream removed, broadcasting: 3\nI0510 23:55:25.669548 434 log.go:172] (0xc000abf600) (0xc0006145a0) Stream removed, broadcasting: 5\n" May 10 23:55:25.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 23:55:25.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 23:55:25.698: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 10 23:55:35.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 10 23:55:35.703: INFO: Waiting for statefulset status.replicas updated to 0 May 10 23:55:35.718: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:55:35.718: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:55:35.718: INFO: May 10 23:55:35.718: INFO: StatefulSet ss has not reached scale 3, at 1 May 10 23:55:36.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993426565s May 10 23:55:37.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988387779s May 10 23:55:38.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982858915s May 10 23:55:39.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.760345771s May 10 23:55:40.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.75507225s May 10 23:55:41.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.74951586s May 10 23:55:42.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.744465938s May 10 23:55:43.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.738760751s May 10 23:55:44.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 733.409986ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6312 May 10 23:55:45.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 23:55:46.214: INFO: stderr: "I0510 23:55:46.119473 454 log.go:172] (0xc000996210) (0xc000550280) Create stream\nI0510 23:55:46.119574 454 log.go:172] (0xc000996210) (0xc000550280) Stream added, broadcasting: 1\nI0510 23:55:46.130593 454 log.go:172] (0xc000996210) Reply frame received for 1\nI0510 23:55:46.130717 454 log.go:172] (0xc000996210) (0xc00052e1e0) Create stream\nI0510 23:55:46.130757 454 log.go:172] (0xc000996210) (0xc00052e1e0) Stream added, broadcasting: 3\nI0510 23:55:46.133059 454 log.go:172] (0xc000996210) Reply frame received for 3\nI0510 23:55:46.133105 454 log.go:172] (0xc000996210) (0xc000328500) Create stream\nI0510 23:55:46.133368 454 log.go:172] (0xc000996210) (0xc000328500) Stream added, broadcasting: 5\nI0510 23:55:46.134532 454 log.go:172] (0xc000996210) Reply frame received for 5\nI0510 23:55:46.207299 454 log.go:172] (0xc000996210) Data frame received for 3\nI0510 23:55:46.207334 454 log.go:172] (0xc00052e1e0) (3) Data frame handling\nI0510 23:55:46.207372 454 log.go:172] (0xc00052e1e0) (3) Data frame sent\nI0510 23:55:46.207392 454 log.go:172] (0xc000996210) Data frame received for 3\nI0510 23:55:46.207409 454 log.go:172] (0xc00052e1e0) (3) Data frame handling\nI0510 23:55:46.207438 454 log.go:172] (0xc000996210) Data frame received for 5\nI0510 23:55:46.207453 454 log.go:172] (0xc000328500) (5) Data frame handling\nI0510 23:55:46.207464 454 log.go:172] (0xc000328500) (5) Data frame sent\nI0510 23:55:46.207471 454 log.go:172] (0xc000996210) Data frame received for 5\nI0510 23:55:46.207476 454 log.go:172] (0xc000328500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 23:55:46.209232 454 log.go:172] (0xc000996210) Data frame received for 1\nI0510 23:55:46.209248 454 log.go:172] (0xc000550280) (1) Data frame handling\nI0510 23:55:46.209265 454 log.go:172] (0xc000550280) (1) Data frame sent\nI0510 23:55:46.209404 454 log.go:172] (0xc000996210) (0xc000550280) Stream removed, broadcasting: 1\nI0510 23:55:46.209472 454 log.go:172] (0xc000996210) Go away received\nI0510 23:55:46.209864 454 log.go:172] (0xc000996210) (0xc000550280) Stream removed, broadcasting: 1\nI0510 23:55:46.209887 454 log.go:172] (0xc000996210) (0xc00052e1e0) Stream removed, broadcasting: 3\nI0510 23:55:46.209904 454 log.go:172] (0xc000996210) (0xc000328500) Stream removed, broadcasting: 5\n" May 10 23:55:46.215: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 23:55:46.215: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 23:55:46.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 23:55:46.435: INFO: stderr: "I0510 23:55:46.347052 474 log.go:172] (0xc00095e4d0) (0xc000a96140) Create stream\nI0510 23:55:46.347129 474 log.go:172] (0xc00095e4d0) (0xc000a96140) Stream added, broadcasting: 1\nI0510 23:55:46.349916 474 log.go:172] (0xc00095e4d0) Reply frame received for 1\nI0510 23:55:46.349953 474 log.go:172] (0xc00095e4d0) (0xc00067ef00) Create stream\nI0510 23:55:46.349972 474 log.go:172] (0xc00095e4d0) (0xc00067ef00) Stream added, broadcasting: 3\nI0510 23:55:46.350795 474 log.go:172] (0xc00095e4d0) Reply frame received for 3\nI0510 
23:55:46.350832 474 log.go:172] (0xc00095e4d0) (0xc000a961e0) Create stream\nI0510 23:55:46.350845 474 log.go:172] (0xc00095e4d0) (0xc000a961e0) Stream added, broadcasting: 5\nI0510 23:55:46.351717 474 log.go:172] (0xc00095e4d0) Reply frame received for 5\nI0510 23:55:46.429034 474 log.go:172] (0xc00095e4d0) Data frame received for 5\nI0510 23:55:46.429078 474 log.go:172] (0xc000a961e0) (5) Data frame handling\nI0510 23:55:46.429088 474 log.go:172] (0xc000a961e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0510 23:55:46.429229 474 log.go:172] (0xc00095e4d0) Data frame received for 3\nI0510 23:55:46.429245 474 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0510 23:55:46.429253 474 log.go:172] (0xc00067ef00) (3) Data frame sent\nI0510 23:55:46.429415 474 log.go:172] (0xc00095e4d0) Data frame received for 5\nI0510 23:55:46.429459 474 log.go:172] (0xc000a961e0) (5) Data frame handling\nI0510 23:55:46.429489 474 log.go:172] (0xc00095e4d0) Data frame received for 3\nI0510 23:55:46.429510 474 log.go:172] (0xc00067ef00) (3) Data frame handling\nI0510 23:55:46.431149 474 log.go:172] (0xc00095e4d0) Data frame received for 1\nI0510 23:55:46.431163 474 log.go:172] (0xc000a96140) (1) Data frame handling\nI0510 23:55:46.431171 474 log.go:172] (0xc000a96140) (1) Data frame sent\nI0510 23:55:46.431189 474 log.go:172] (0xc00095e4d0) (0xc000a96140) Stream removed, broadcasting: 1\nI0510 23:55:46.431226 474 log.go:172] (0xc00095e4d0) Go away received\nI0510 23:55:46.431427 474 log.go:172] (0xc00095e4d0) (0xc000a96140) Stream removed, broadcasting: 1\nI0510 23:55:46.431453 474 log.go:172] (0xc00095e4d0) (0xc00067ef00) Stream removed, broadcasting: 3\nI0510 23:55:46.431459 474 log.go:172] (0xc00095e4d0) (0xc000a961e0) Stream removed, broadcasting: 5\n" May 10 23:55:46.435: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 23:55:46.435: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 23:55:46.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 23:55:46.648: INFO: stderr: "I0510 23:55:46.575834 494 log.go:172] (0xc00003a420) (0xc0003035e0) Create stream\nI0510 23:55:46.575898 494 log.go:172] (0xc00003a420) (0xc0003035e0) Stream added, broadcasting: 1\nI0510 23:55:46.578127 494 log.go:172] (0xc00003a420) Reply frame received for 1\nI0510 23:55:46.578169 494 log.go:172] (0xc00003a420) (0xc0000dd900) Create stream\nI0510 23:55:46.578184 494 log.go:172] (0xc00003a420) (0xc0000dd900) Stream added, broadcasting: 3\nI0510 23:55:46.579124 494 log.go:172] (0xc00003a420) Reply frame received for 3\nI0510 23:55:46.579178 494 log.go:172] (0xc00003a420) (0xc00013b860) Create stream\nI0510 23:55:46.579196 494 log.go:172] (0xc00003a420) (0xc00013b860) Stream added, broadcasting: 5\nI0510 23:55:46.580203 494 log.go:172] (0xc00003a420) Reply frame received for 5\nI0510 23:55:46.642876 494 log.go:172] (0xc00003a420) Data frame received for 3\nI0510 23:55:46.642907 494 log.go:172] (0xc0000dd900) (3) Data frame handling\nI0510 23:55:46.642918 494 log.go:172] (0xc0000dd900) (3) Data frame sent\nI0510 23:55:46.642925 494 log.go:172] (0xc00003a420) Data frame received for 3\nI0510 23:55:46.642931 494 log.go:172] 
(0xc0000dd900) (3) Data frame handling\nI0510 23:55:46.642966 494 log.go:172] (0xc00003a420) Data frame received for 5\nI0510 23:55:46.642994 494 log.go:172] (0xc00013b860) (5) Data frame handling\nI0510 23:55:46.643012 494 log.go:172] (0xc00013b860) (5) Data frame sent\nI0510 23:55:46.643033 494 log.go:172] (0xc00003a420) Data frame received for 5\nI0510 23:55:46.643047 494 log.go:172] (0xc00013b860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0510 23:55:46.644118 494 log.go:172] (0xc00003a420) Data frame received for 1\nI0510 23:55:46.644137 494 log.go:172] (0xc0003035e0) (1) Data frame handling\nI0510 23:55:46.644149 494 log.go:172] (0xc0003035e0) (1) Data frame sent\nI0510 23:55:46.644162 494 log.go:172] (0xc00003a420) (0xc0003035e0) Stream removed, broadcasting: 1\nI0510 23:55:46.644181 494 log.go:172] (0xc00003a420) Go away received\nI0510 23:55:46.644447 494 log.go:172] (0xc00003a420) (0xc0003035e0) Stream removed, broadcasting: 1\nI0510 23:55:46.644460 494 log.go:172] (0xc00003a420) (0xc0000dd900) Stream removed, broadcasting: 3\nI0510 23:55:46.644467 494 log.go:172] (0xc00003a420) (0xc00013b860) Stream removed, broadcasting: 5\n" May 10 23:55:46.648: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 23:55:46.648: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 23:55:46.652: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 10 23:55:56.658: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 10 23:55:56.658: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 10 23:55:56.658: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 10 23:55:56.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 23:55:56.889: INFO: stderr: "I0510 23:55:56.797011 513 log.go:172] (0xc00098f550) (0xc000a983c0) Create stream\nI0510 23:55:56.797092 513 log.go:172] (0xc00098f550) (0xc000a983c0) Stream added, broadcasting: 1\nI0510 23:55:56.802043 513 log.go:172] (0xc00098f550) Reply frame received for 1\nI0510 23:55:56.802091 513 log.go:172] (0xc00098f550) (0xc000546280) Create stream\nI0510 23:55:56.802109 513 log.go:172] (0xc00098f550) (0xc000546280) Stream added, broadcasting: 3\nI0510 23:55:56.803005 513 log.go:172] (0xc00098f550) Reply frame received for 3\nI0510 23:55:56.803044 513 log.go:172] (0xc00098f550) (0xc000547040) Create stream\nI0510 23:55:56.803057 513 log.go:172] (0xc00098f550) (0xc000547040) Stream added, broadcasting: 5\nI0510 23:55:56.803844 513 log.go:172] (0xc00098f550) Reply frame received for 5\nI0510 23:55:56.883491 513 log.go:172] (0xc00098f550) Data frame received for 3\nI0510 23:55:56.883517 513 log.go:172] (0xc000546280) (3) Data frame handling\nI0510 23:55:56.883524 513 log.go:172] (0xc000546280) (3) Data frame sent\nI0510 23:55:56.883529 513 log.go:172] (0xc00098f550) Data frame received for 3\nI0510 23:55:56.883533 513 log.go:172] (0xc000546280) (3) Data frame handling\nI0510 23:55:56.883553 513 log.go:172] (0xc00098f550) Data frame received 
for 5\nI0510 23:55:56.883558 513 log.go:172] (0xc000547040) (5) Data frame handling\nI0510 23:55:56.883563 513 log.go:172] (0xc000547040) (5) Data frame sent\nI0510 23:55:56.883567 513 log.go:172] (0xc00098f550) Data frame received for 5\nI0510 23:55:56.883571 513 log.go:172] (0xc000547040) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 23:55:56.884387 513 log.go:172] (0xc00098f550) Data frame received for 1\nI0510 23:55:56.884400 513 log.go:172] (0xc000a983c0) (1) Data frame handling\nI0510 23:55:56.884406 513 log.go:172] (0xc000a983c0) (1) Data frame sent\nI0510 23:55:56.884420 513 log.go:172] (0xc00098f550) (0xc000a983c0) Stream removed, broadcasting: 1\nI0510 23:55:56.884444 513 log.go:172] (0xc00098f550) Go away received\nI0510 23:55:56.884721 513 log.go:172] (0xc00098f550) (0xc000a983c0) Stream removed, broadcasting: 1\nI0510 23:55:56.884736 513 log.go:172] (0xc00098f550) (0xc000546280) Stream removed, broadcasting: 3\nI0510 23:55:56.884743 513 log.go:172] (0xc00098f550) (0xc000547040) Stream removed, broadcasting: 5\n" May 10 23:55:56.889: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 23:55:56.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 23:55:56.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 23:55:57.116: INFO: stderr: "I0510 23:55:57.018187 533 log.go:172] (0xc000ade160) (0xc0006a0e60) Create stream\nI0510 23:55:57.018237 533 log.go:172] (0xc000ade160) (0xc0006a0e60) Stream added, broadcasting: 1\nI0510 23:55:57.020390 533 log.go:172] (0xc000ade160) Reply frame received for 1\nI0510 23:55:57.020417 533 log.go:172] (0xc000ade160) (0xc0005ddc20) Create stream\nI0510 23:55:57.020429 533 log.go:172] (0xc000ade160) (0xc0005ddc20) Stream added, broadcasting: 3\nI0510 23:55:57.021351 533 log.go:172] (0xc000ade160) Reply frame received for 3\nI0510 23:55:57.021405 533 log.go:172] (0xc000ade160) (0xc0006e4d20) Create stream\nI0510 23:55:57.021433 533 log.go:172] (0xc000ade160) (0xc0006e4d20) Stream added, broadcasting: 5\nI0510 23:55:57.022081 533 log.go:172] (0xc000ade160) Reply frame received for 5\nI0510 23:55:57.081788 533 log.go:172] (0xc000ade160) Data frame received for 5\nI0510 23:55:57.081814 533 log.go:172] (0xc0006e4d20) (5) Data frame handling\nI0510 23:55:57.081828 533 log.go:172] (0xc0006e4d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 23:55:57.108039 533 log.go:172] (0xc000ade160) Data frame received for 3\nI0510 23:55:57.108176 533 log.go:172] (0xc0005ddc20) (3) Data frame handling\nI0510 23:55:57.108266 533 log.go:172] (0xc0005ddc20) (3) Data frame sent\nI0510 23:55:57.108433 533 log.go:172] (0xc000ade160) Data frame received for 3\nI0510 23:55:57.108466 533 log.go:172] (0xc0005ddc20) (3) Data frame handling\nI0510 23:55:57.108846 533 log.go:172] (0xc000ade160) Data frame received for 5\nI0510 23:55:57.108879 533 log.go:172] (0xc0006e4d20) (5) Data frame handling\nI0510 23:55:57.110687 533 log.go:172] (0xc000ade160) Data frame received for 1\nI0510 23:55:57.110721 533 log.go:172] (0xc0006a0e60) (1) Data frame handling\nI0510 23:55:57.110745 533 log.go:172] (0xc0006a0e60) (1) Data frame sent\nI0510 23:55:57.110861 533 log.go:172] (0xc000ade160) (0xc0006a0e60) Stream removed, 
broadcasting: 1\nI0510 23:55:57.111066 533 log.go:172] (0xc000ade160) Go away received\nI0510 23:55:57.111477 533 log.go:172] (0xc000ade160) (0xc0006a0e60) Stream removed, broadcasting: 1\nI0510 23:55:57.111510 533 log.go:172] (0xc000ade160) (0xc0005ddc20) Stream removed, broadcasting: 3\nI0510 23:55:57.111532 533 log.go:172] (0xc000ade160) (0xc0006e4d20) Stream removed, broadcasting: 5\n" May 10 23:55:57.116: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 23:55:57.116: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 23:55:57.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6312 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 23:55:57.346: INFO: stderr: "I0510 23:55:57.231912 553 log.go:172] (0xc000a1e0b0) (0xc00052c320) Create stream\nI0510 23:55:57.232088 553 log.go:172] (0xc000a1e0b0) (0xc00052c320) Stream added, broadcasting: 1\nI0510 23:55:57.238278 553 log.go:172] (0xc000a1e0b0) Reply frame received for 1\nI0510 23:55:57.238330 553 log.go:172] (0xc000a1e0b0) (0xc000512e60) Create stream\nI0510 23:55:57.238346 553 log.go:172] (0xc000a1e0b0) (0xc000512e60) Stream added, broadcasting: 3\nI0510 23:55:57.241019 553 log.go:172] (0xc000a1e0b0) Reply frame received for 3\nI0510 23:55:57.241060 553 log.go:172] (0xc000a1e0b0) (0xc00052d2c0) Create stream\nI0510 23:55:57.241086 553 log.go:172] (0xc000a1e0b0) (0xc00052d2c0) Stream added, broadcasting: 5\nI0510 23:55:57.242070 553 log.go:172] (0xc000a1e0b0) Reply frame received for 5\nI0510 23:55:57.316418 553 log.go:172] (0xc000a1e0b0) Data frame received for 5\nI0510 23:55:57.316446 553 log.go:172] (0xc00052d2c0) (5) Data frame handling\nI0510 23:55:57.316466 553 log.go:172] (0xc00052d2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 23:55:57.340358 553 log.go:172] (0xc000a1e0b0) Data frame received for 5\nI0510 23:55:57.340402 553 log.go:172] (0xc000a1e0b0) Data frame received for 3\nI0510 23:55:57.340441 553 log.go:172] (0xc000512e60) (3) Data frame handling\nI0510 23:55:57.340463 553 log.go:172] (0xc000512e60) (3) Data frame sent\nI0510 23:55:57.340481 553 log.go:172] (0xc000a1e0b0) Data frame received for 3\nI0510 23:55:57.340494 553 log.go:172] (0xc000512e60) (3) Data frame handling\nI0510 23:55:57.340525 553 log.go:172] (0xc00052d2c0) (5) Data frame handling\nI0510 23:55:57.342172 553 log.go:172] (0xc000a1e0b0) Data frame received for 1\nI0510 23:55:57.342189 553 log.go:172] (0xc00052c320) (1) Data frame handling\nI0510 23:55:57.342197 553 log.go:172] (0xc00052c320) (1) Data frame sent\nI0510 23:55:57.342208 553 log.go:172] (0xc000a1e0b0) (0xc00052c320) Stream removed, broadcasting: 1\nI0510 23:55:57.342219 553 log.go:172] (0xc000a1e0b0) Go away received\nI0510 23:55:57.342496 553 log.go:172] (0xc000a1e0b0) (0xc00052c320) Stream removed, broadcasting: 1\nI0510 23:55:57.342519 553 log.go:172] (0xc000a1e0b0) (0xc000512e60) Stream removed, broadcasting: 3\nI0510 23:55:57.342531 553 log.go:172] (0xc000a1e0b0) (0xc00052d2c0) Stream removed, broadcasting: 5\n" May 10 23:55:57.346: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 23:55:57.346: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 23:55:57.346: INFO: 
Waiting for statefulset status.replicas updated to 0 May 10 23:55:57.348: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 10 23:56:07.356: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 10 23:56:07.356: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 10 23:56:07.356: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 10 23:56:07.374: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:07.374: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:07.374: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:07.374: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:07.374: INFO: May 10 23:56:07.374: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 23:56:08.687: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:08.688: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:08.688: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:08.688: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:08.688: INFO: May 10 23:56:08.688: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 23:56:09.807: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:09.807: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:09.807: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:09.807: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:09.807: INFO: May 10 23:56:09.807: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 23:56:10.812: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:10.812: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:10.812: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:10.812: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:10.812: INFO: May 
10 23:56:10.812: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 23:56:11.818: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:11.818: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:11.818: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:11.818: INFO: May 10 23:56:11.818: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 23:56:12.822: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:12.822: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:12.822: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:12.822: INFO: May 10 23:56:12.822: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 23:56:13.828: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:13.828: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:13.828: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:13.828: INFO: May 10 23:56:13.828: INFO: StatefulSet ss has not 
reached scale 0, at 2 May 10 23:56:14.832: INFO: POD NODE PHASE GRACE CONDITIONS May 10 23:56:14.832: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:15 +0000 UTC }] May 10 23:56:14.832: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 23:55:35 +0000 UTC }] May 10 23:56:14.832: INFO: May 10 23:56:14.832: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 23:56:15.836: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.539258377s May 10 23:56:16.841: INFO: Verifying statefulset ss doesn't scale past 0 for another 535.324804ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6312 May 10 23:56:17.845: INFO: Scaling statefulset ss to 0 May 10 23:56:17.855: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 10 23:56:17.858: INFO: Deleting all statefulsets in ns statefulset-6312 May 10 23:56:17.860: INFO: Scaling statefulset ss to 0 May 10 23:56:17.869: INFO: Waiting for statefulset status.replicas updated to 0 May 10 23:56:17.871: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:56:17.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6312" for this suite.
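The scale-down being polled above can be reproduced by hand against the same cluster; a minimal sketch, assuming the namespace and StatefulSet names from this run:
# Mirror the test's "Scaling statefulset ss to 0" step
kubectl --kubeconfig=/root/.kube/config -n statefulset-6312 scale statefulset ss --replicas=0
# Poll status.replicas until it reaches 0; pods stuck ContainersNotReady still count until they terminate
kubectl -n statefulset-6312 get statefulset ss -o jsonpath='{.status.replicas}'
The burst behavior under test corresponds to podManagementPolicy: Parallel, which lets all replicas start and terminate at once rather than one ordinal at a time.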
• [SLOW TEST:62.583 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":40,"skipped":664,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:56:17.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7ba6e4e0-0e60-4df1-97e0-735072425534 STEP: Creating a pod to test consume secrets May 10 23:56:18.015: INFO: Waiting up to 5m0s for pod "pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173" in namespace "secrets-7455" to be "Succeeded or Failed" May 10 23:56:18.034: INFO: Pod "pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173": Phase="Pending", Reason="", readiness=false. Elapsed: 19.015721ms May 10 23:56:20.038: INFO: Pod "pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023374294s May 10 23:56:22.043: INFO: Pod "pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028418655s STEP: Saw pod success May 10 23:56:22.043: INFO: Pod "pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173" satisfied condition "Succeeded or Failed" May 10 23:56:22.046: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173 container secret-volume-test: STEP: delete the pod May 10 23:56:22.093: INFO: Waiting for pod pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173 to disappear May 10 23:56:22.109: INFO: Pod pod-secrets-19d46e62-154b-4657-89f8-6af5d517a173 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:56:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7455" for this suite. 
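The pattern exercised here is a single Secret projected into the same pod at two mount points; a minimal sketch with hypothetical names (the image, key, and paths are illustrative, not the suite's):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                     # assumption: any image that can read a file
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example  # hypothetical; the run above used a generated name
  - name: secret-volume-2
    secret:
      secretName: secret-test-example
EOF
The pod reaches Succeeded only if both mounts expose the same key, which is what "consumable in multiple volumes" asserts.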
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":669,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:56:22.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 10 23:56:22.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 23:56:22.257: INFO: Waiting for terminating namespaces to be deleted... May 10 23:56:22.259: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 10 23:56:22.265: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 10 23:56:22.265: INFO: Container kindnet-cni ready: true, restart count 0 May 10 23:56:22.265: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 10 23:56:22.265: INFO: Container kube-proxy ready: true, restart count 0 May 10 23:56:22.265: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 10 23:56:22.270: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 10 23:56:22.270: INFO: Container kindnet-cni ready: true, restart count 0 May 10 23:56:22.270: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 10 23:56:22.270: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-4753d6d4-15a6-4439-9ceb-2f6eed14c78c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4753d6d4-15a6-4439-9ceb-2f6eed14c78c off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4753d6d4-15a6-4439-9ceb-2f6eed14c78c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:56:38.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4622" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":42,"skipped":671,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:56:38.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-64305b85-c4e6-49d1-815f-6021b40fc1cb STEP: Creating the pod STEP: Updating configmap configmap-test-upd-64305b85-c4e6-49d1-815f-6021b40fc1cb STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:57:58.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7439" for this suite. 
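The wait at the end of this test ("waiting to observe update in volume") is inherent to configMap volumes: the kubelet refreshes them asynchronously on its sync period plus cache TTL, so an update can take tens of seconds to reach the pod. The round trip, with hypothetical names (the mount path is illustrative):
# Create the ConfigMap, mount it in a pod as a configMap volume, then update it in place
kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
kubectl patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
# Eventually the projected file catches up:
kubectl exec pod-configmaps-example -- cat /etc/configmap-volume/data-1   # prints value-2 once synced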
• [SLOW TEST:80.499 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:57:58.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 10 23:57:59.128: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:57:59.148: INFO: Number of nodes with available pods: 0 May 10 23:57:59.148: INFO: Node latest-worker is running more than one daemon pod May 10 23:58:00.153: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:00.157: INFO: Number of nodes with available pods: 0 May 10 23:58:00.157: INFO: Node latest-worker is running more than one daemon pod May 10 23:58:01.153: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:01.157: INFO: Number of nodes with available pods: 0 May 10 23:58:01.157: INFO: Node latest-worker is running more than one daemon pod May 10 23:58:02.161: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:02.164: INFO: Number of nodes with available pods: 0 May 10 23:58:02.164: INFO: Node latest-worker is running more than one daemon pod May 10 23:58:03.154: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:03.158: INFO: Number of nodes with available pods: 0 May 10 23:58:03.158: INFO: Node latest-worker is running more than one daemon pod May 10 23:58:04.153: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:04.156: INFO: Number of nodes with available pods: 2 May 10 23:58:04.157: INFO: Number of running 
nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 10 23:58:04.188: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:04.192: INFO: Number of nodes with available pods: 1 May 10 23:58:04.192: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:05.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:05.201: INFO: Number of nodes with available pods: 1 May 10 23:58:05.201: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:06.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:06.200: INFO: Number of nodes with available pods: 1 May 10 23:58:06.200: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:07.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:07.202: INFO: Number of nodes with available pods: 1 May 10 23:58:07.202: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:08.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:08.201: INFO: Number of nodes with available pods: 1 May 10 23:58:08.201: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:09.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:09.198: INFO: Number of nodes with available pods: 1 May 10 23:58:09.198: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:10.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:10.199: INFO: Number of nodes with available pods: 1 May 10 23:58:10.200: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:11.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:11.203: INFO: Number of nodes with available pods: 1 May 10 23:58:11.203: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:12.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:12.202: INFO: Number of nodes with available pods: 1 May 10 23:58:12.202: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:13.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:13.198: INFO: Number of nodes with available pods: 1 May 10 23:58:13.198: INFO: Node latest-worker2 is running more than one daemon pod May 10 
23:58:14.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:14.202: INFO: Number of nodes with available pods: 1 May 10 23:58:14.202: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:15.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:15.202: INFO: Number of nodes with available pods: 1 May 10 23:58:15.202: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:16.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:16.201: INFO: Number of nodes with available pods: 1 May 10 23:58:16.201: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:17.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:17.200: INFO: Number of nodes with available pods: 1 May 10 23:58:17.200: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:18.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:18.201: INFO: Number of nodes with available pods: 1 May 10 23:58:18.201: INFO: Node latest-worker2 is running more than one daemon pod May 10 23:58:19.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 23:58:19.201: INFO: Number of nodes with available pods: 2 May 10 23:58:19.201: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1722, will wait for the garbage collector to delete the pods May 10 23:58:19.269: INFO: Deleting DaemonSet.extensions daemon-set took: 7.024179ms May 10 23:58:19.669: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.245063ms May 10 23:58:25.286: INFO: Number of nodes with available pods: 0 May 10 23:58:25.286: INFO: Number of running nodes: 0, number of available pods: 0 May 10 23:58:25.289: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1722/daemonsets","resourceVersion":"3207356"},"items":null} May 10 23:58:25.294: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1722/pods","resourceVersion":"3207356"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:58:25.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1722" for this suite. 
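The repeated "DaemonSet pods can't tolerate node latest-control-plane ... skip checking this node" lines are the framework excluding the tainted control-plane node from its pod counts, not a failure. A minimal DaemonSet of the same shape (the body is an illustrative sketch); with no toleration for node-role.kubernetes.io/master it lands only on the two workers, matching the counts above:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set          # same name as the test's
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # no toleration for {node-role.kubernetes.io/master:NoSchedule},
      # so the control-plane node is skipped
      containers:
      - name: app
        image: busybox      # assumption
        command: ["sleep", "3600"]
EOF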
• [SLOW TEST:26.336 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":44,"skipped":709,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:58:25.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2882 STEP: creating service affinity-nodeport in namespace services-2882 STEP: creating replication controller affinity-nodeport in namespace services-2882 I0510 23:58:25.462754 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2882, replica count: 3 I0510 23:58:28.513346 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 23:58:31.513583 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 23:58:31.595: INFO: Creating new exec pod May 10 23:58:36.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2882 execpod-affinity74snc -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 10 23:58:36.848: INFO: stderr: "I0510 23:58:36.763280 574 log.go:172] (0xc000abcf20) (0xc000c80280) Create stream\nI0510 23:58:36.763361 574 log.go:172] (0xc000abcf20) (0xc000c80280) Stream added, broadcasting: 1\nI0510 23:58:36.768327 574 log.go:172] (0xc000abcf20) Reply frame received for 1\nI0510 23:58:36.768368 574 log.go:172] (0xc000abcf20) (0xc00072eaa0) Create stream\nI0510 23:58:36.768380 574 log.go:172] (0xc000abcf20) (0xc00072eaa0) Stream added, broadcasting: 3\nI0510 23:58:36.769745 574 log.go:172] (0xc000abcf20) Reply frame received for 3\nI0510 23:58:36.769791 574 log.go:172] (0xc000abcf20) (0xc000726d20) Create stream\nI0510 23:58:36.769810 574 log.go:172] (0xc000abcf20) (0xc000726d20) Stream added, broadcasting: 5\nI0510 23:58:36.770758 574 log.go:172] (0xc000abcf20) Reply frame received for 5\nI0510 23:58:36.842095 574 log.go:172] (0xc000abcf20) Data frame received for 5\nI0510 23:58:36.842186 574 log.go:172] (0xc000726d20) (5) Data frame handling\nI0510 23:58:36.842236 574 log.go:172] (0xc000726d20) (5) Data frame sent\nI0510 23:58:36.842248 574 log.go:172] (0xc000abcf20) Data frame received for 5\nI0510 23:58:36.842255 574 
log.go:172] (0xc000726d20) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0510 23:58:36.842298 574 log.go:172] (0xc000726d20) (5) Data frame sent\nI0510 23:58:36.842313 574 log.go:172] (0xc000abcf20) Data frame received for 5\nI0510 23:58:36.842320 574 log.go:172] (0xc000726d20) (5) Data frame handling\nI0510 23:58:36.842400 574 log.go:172] (0xc000abcf20) Data frame received for 3\nI0510 23:58:36.842415 574 log.go:172] (0xc00072eaa0) (3) Data frame handling\nI0510 23:58:36.843840 574 log.go:172] (0xc000abcf20) Data frame received for 1\nI0510 23:58:36.843875 574 log.go:172] (0xc000c80280) (1) Data frame handling\nI0510 23:58:36.843890 574 log.go:172] (0xc000c80280) (1) Data frame sent\nI0510 23:58:36.843909 574 log.go:172] (0xc000abcf20) (0xc000c80280) Stream removed, broadcasting: 1\nI0510 23:58:36.843928 574 log.go:172] (0xc000abcf20) Go away received\nI0510 23:58:36.844278 574 log.go:172] (0xc000abcf20) (0xc000c80280) Stream removed, broadcasting: 1\nI0510 23:58:36.844294 574 log.go:172] (0xc000abcf20) (0xc00072eaa0) Stream removed, broadcasting: 3\nI0510 23:58:36.844308 574 log.go:172] (0xc000abcf20) (0xc000726d20) Stream removed, broadcasting: 5\n" May 10 23:58:36.849: INFO: stdout: "" May 10 23:58:36.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2882 execpod-affinity74snc -- /bin/sh -x -c nc -zv -t -w 2 10.109.215.9 80' May 10 23:58:37.049: INFO: stderr: "I0510 23:58:36.983779 597 log.go:172] (0xc000a8b1e0) (0xc000740f00) Create stream\nI0510 23:58:36.983823 597 log.go:172] (0xc000a8b1e0) (0xc000740f00) Stream added, broadcasting: 1\nI0510 23:58:36.985694 597 log.go:172] (0xc000a8b1e0) Reply frame received for 1\nI0510 23:58:36.985740 597 log.go:172] (0xc000a8b1e0) (0xc0008565a0) Create stream\nI0510 23:58:36.985755 597 log.go:172] (0xc000a8b1e0) (0xc0008565a0) Stream added, broadcasting: 3\nI0510 23:58:36.986383 597 log.go:172] (0xc000a8b1e0) Reply frame received for 3\nI0510 23:58:36.986406 597 log.go:172] (0xc000a8b1e0) (0xc0007414a0) Create stream\nI0510 23:58:36.986413 597 log.go:172] (0xc000a8b1e0) (0xc0007414a0) Stream added, broadcasting: 5\nI0510 23:58:36.987218 597 log.go:172] (0xc000a8b1e0) Reply frame received for 5\nI0510 23:58:37.043532 597 log.go:172] (0xc000a8b1e0) Data frame received for 5\nI0510 23:58:37.043573 597 log.go:172] (0xc0007414a0) (5) Data frame handling\nI0510 23:58:37.043584 597 log.go:172] (0xc0007414a0) (5) Data frame sent\nI0510 23:58:37.043594 597 log.go:172] (0xc000a8b1e0) Data frame received for 5\nI0510 23:58:37.043605 597 log.go:172] (0xc0007414a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.215.9 80\nConnection to 10.109.215.9 80 port [tcp/http] succeeded!\nI0510 23:58:37.043631 597 log.go:172] (0xc000a8b1e0) Data frame received for 3\nI0510 23:58:37.043643 597 log.go:172] (0xc0008565a0) (3) Data frame handling\nI0510 23:58:37.045026 597 log.go:172] (0xc000a8b1e0) Data frame received for 1\nI0510 23:58:37.045046 597 log.go:172] (0xc000740f00) (1) Data frame handling\nI0510 23:58:37.045073 597 log.go:172] (0xc000740f00) (1) Data frame sent\nI0510 23:58:37.045085 597 log.go:172] (0xc000a8b1e0) (0xc000740f00) Stream removed, broadcasting: 1\nI0510 23:58:37.045417 597 log.go:172] (0xc000a8b1e0) (0xc000740f00) Stream removed, broadcasting: 1\nI0510 23:58:37.045434 597 log.go:172] (0xc000a8b1e0) (0xc0008565a0) Stream removed, broadcasting: 3\nI0510 23:58:37.045554 597 
log.go:172] (0xc000a8b1e0) (0xc0007414a0) Stream removed, broadcasting: 5\nI0510 23:58:37.045615 597 log.go:172] (0xc000a8b1e0) Go away received\n" May 10 23:58:37.050: INFO: stdout: "" May 10 23:58:37.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2882 execpod-affinity74snc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32315' May 10 23:58:37.262: INFO: stderr: "I0510 23:58:37.188875 617 log.go:172] (0xc0009b2a50) (0xc000b8e460) Create stream\nI0510 23:58:37.188939 617 log.go:172] (0xc0009b2a50) (0xc000b8e460) Stream added, broadcasting: 1\nI0510 23:58:37.191527 617 log.go:172] (0xc0009b2a50) Reply frame received for 1\nI0510 23:58:37.191560 617 log.go:172] (0xc0009b2a50) (0xc0006d0f00) Create stream\nI0510 23:58:37.191570 617 log.go:172] (0xc0009b2a50) (0xc0006d0f00) Stream added, broadcasting: 3\nI0510 23:58:37.192609 617 log.go:172] (0xc0009b2a50) Reply frame received for 3\nI0510 23:58:37.192668 617 log.go:172] (0xc0009b2a50) (0xc0006a0d20) Create stream\nI0510 23:58:37.192684 617 log.go:172] (0xc0009b2a50) (0xc0006a0d20) Stream added, broadcasting: 5\nI0510 23:58:37.194399 617 log.go:172] (0xc0009b2a50) Reply frame received for 5\nI0510 23:58:37.256924 617 log.go:172] (0xc0009b2a50) Data frame received for 5\nI0510 23:58:37.256946 617 log.go:172] (0xc0006a0d20) (5) Data frame handling\nI0510 23:58:37.256957 617 log.go:172] (0xc0006a0d20) (5) Data frame sent\nI0510 23:58:37.256965 617 log.go:172] (0xc0009b2a50) Data frame received for 5\nI0510 23:58:37.256972 617 log.go:172] (0xc0006a0d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32315\nConnection to 172.17.0.13 32315 port [tcp/32315] succeeded!\nI0510 23:58:37.257249 617 log.go:172] (0xc0009b2a50) Data frame received for 3\nI0510 23:58:37.257269 617 log.go:172] (0xc0006d0f00) (3) Data frame handling\nI0510 23:58:37.258317 617 log.go:172] (0xc0009b2a50) Data frame received for 1\nI0510 23:58:37.258346 617 log.go:172] (0xc000b8e460) (1) Data frame handling\nI0510 23:58:37.258361 617 log.go:172] (0xc000b8e460) (1) Data frame sent\nI0510 23:58:37.258376 617 log.go:172] (0xc0009b2a50) (0xc000b8e460) Stream removed, broadcasting: 1\nI0510 23:58:37.258390 617 log.go:172] (0xc0009b2a50) Go away received\nI0510 23:58:37.258594 617 log.go:172] (0xc0009b2a50) (0xc000b8e460) Stream removed, broadcasting: 1\nI0510 23:58:37.258606 617 log.go:172] (0xc0009b2a50) (0xc0006d0f00) Stream removed, broadcasting: 3\nI0510 23:58:37.258612 617 log.go:172] (0xc0009b2a50) (0xc0006a0d20) Stream removed, broadcasting: 5\n" May 10 23:58:37.262: INFO: stdout: "" May 10 23:58:37.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2882 execpod-affinity74snc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32315' May 10 23:58:37.517: INFO: stderr: "I0510 23:58:37.382184 639 log.go:172] (0xc0009c4000) (0xc0005001e0) Create stream\nI0510 23:58:37.382257 639 log.go:172] (0xc0009c4000) (0xc0005001e0) Stream added, broadcasting: 1\nI0510 23:58:37.384365 639 log.go:172] (0xc0009c4000) Reply frame received for 1\nI0510 23:58:37.384404 639 log.go:172] (0xc0009c4000) (0xc0004d0dc0) Create stream\nI0510 23:58:37.384418 639 log.go:172] (0xc0009c4000) (0xc0004d0dc0) Stream added, broadcasting: 3\nI0510 23:58:37.385107 639 log.go:172] (0xc0009c4000) Reply frame received for 3\nI0510 23:58:37.385269 639 log.go:172] (0xc0009c4000) (0xc000500960) Create stream\nI0510 23:58:37.385277 639 log.go:172] 
(0xc0009c4000) (0xc000500960) Stream added, broadcasting: 5\nI0510 23:58:37.386032 639 log.go:172] (0xc0009c4000) Reply frame received for 5\nI0510 23:58:37.512287 639 log.go:172] (0xc0009c4000) Data frame received for 5\nI0510 23:58:37.512320 639 log.go:172] (0xc000500960) (5) Data frame handling\nI0510 23:58:37.512330 639 log.go:172] (0xc000500960) (5) Data frame sent\nI0510 23:58:37.512338 639 log.go:172] (0xc0009c4000) Data frame received for 5\nI0510 23:58:37.512344 639 log.go:172] (0xc000500960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32315\nConnection to 172.17.0.12 32315 port [tcp/32315] succeeded!\nI0510 23:58:37.512355 639 log.go:172] (0xc0009c4000) Data frame received for 3\nI0510 23:58:37.512417 639 log.go:172] (0xc0004d0dc0) (3) Data frame handling\nI0510 23:58:37.513260 639 log.go:172] (0xc0009c4000) Data frame received for 1\nI0510 23:58:37.513275 639 log.go:172] (0xc0005001e0) (1) Data frame handling\nI0510 23:58:37.513286 639 log.go:172] (0xc0005001e0) (1) Data frame sent\nI0510 23:58:37.513411 639 log.go:172] (0xc0009c4000) (0xc0005001e0) Stream removed, broadcasting: 1\nI0510 23:58:37.513443 639 log.go:172] (0xc0009c4000) Go away received\nI0510 23:58:37.513622 639 log.go:172] (0xc0009c4000) (0xc0005001e0) Stream removed, broadcasting: 1\nI0510 23:58:37.513634 639 log.go:172] (0xc0009c4000) (0xc0004d0dc0) Stream removed, broadcasting: 3\nI0510 23:58:37.513640 639 log.go:172] (0xc0009c4000) (0xc000500960) Stream removed, broadcasting: 5\n" May 10 23:58:37.517: INFO: stdout: "" May 10 23:58:37.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2882 execpod-affinity74snc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32315/ ; done' May 10 23:58:37.776: INFO: stderr: "I0510 23:58:37.622768 659 log.go:172] (0xc000a753f0) (0xc000c3e1e0) Create stream\nI0510 23:58:37.622808 659 log.go:172] (0xc000a753f0) (0xc000c3e1e0) Stream added, broadcasting: 1\nI0510 23:58:37.627967 659 log.go:172] (0xc000a753f0) Reply frame received for 1\nI0510 23:58:37.628012 659 log.go:172] (0xc000a753f0) (0xc0007405a0) Create stream\nI0510 23:58:37.628024 659 log.go:172] (0xc000a753f0) (0xc0007405a0) Stream added, broadcasting: 3\nI0510 23:58:37.628991 659 log.go:172] (0xc000a753f0) Reply frame received for 3\nI0510 23:58:37.629032 659 log.go:172] (0xc000a753f0) (0xc000704500) Create stream\nI0510 23:58:37.629045 659 log.go:172] (0xc000a753f0) (0xc000704500) Stream added, broadcasting: 5\nI0510 23:58:37.630103 659 log.go:172] (0xc000a753f0) Reply frame received for 5\nI0510 23:58:37.689696 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.689726 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.689734 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.689773 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.689818 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.689841 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.698104 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.698128 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.698143 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.698527 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.698545 659 log.go:172] 
(0xc000704500) (5) Data frame handling\nI0510 23:58:37.698558 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.698570 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.698578 659 log.go:172] (0xc000704500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.698590 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.698612 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.698622 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.698639 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.702215 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.702232 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.702249 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.702516 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.702535 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.702554 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.702592 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.702608 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.702617 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.708129 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.708145 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.708159 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.708467 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.708483 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.708490 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.708499 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.708504 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.708509 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.712402 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.712418 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.712433 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.712740 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.712757 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.712767 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.712779 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.712790 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.712796 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.716460 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.716482 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.716499 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.716918 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.716940 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.716951 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.716985 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.716995 659 log.go:172] (0xc000704500) (5) Data frame 
handling\nI0510 23:58:37.717004 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.717012 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.717019 659 log.go:172] (0xc000704500) (5) Data frame handling\n+ echo\nI0510 23:58:37.717034 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.717042 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.717053 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.717063 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.717075 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.717083 659 log.go:172] (0xc000704500) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.717104 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.722893 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.722951 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.722973 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.723300 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.723331 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.723344 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.723361 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.723377 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.723389 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.728138 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.728155 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.728191 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.728594 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.728620 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.728627 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.728651 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.728681 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.728704 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.734283 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.734312 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.734340 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.734723 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.734736 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.734743 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.734754 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.734759 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.734765 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.738795 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.738812 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.738826 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.739225 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.739239 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.739250 
659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.739260 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.739271 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.739277 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.743072 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.743090 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.743106 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.743690 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.743714 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.743723 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.743729 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.743740 659 log.go:172] (0xc000704500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.743753 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.743762 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.743772 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.743782 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.747491 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.747506 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.747518 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.747865 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.747875 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.747881 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.747889 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.747894 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.747898 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.747902 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.747907 659 log.go:172] (0xc000704500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.747927 659 log.go:172] (0xc000704500) (5) Data frame sent\nI0510 23:58:37.751903 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.751917 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.751923 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.752332 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.752342 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.752347 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.752371 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.752402 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.752422 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.755661 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.755676 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.755687 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.755981 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.756002 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.756010 659 log.go:172] (0xc0007405a0) (3) 
Data frame sent\nI0510 23:58:37.756021 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.756025 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.756031 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.759735 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.759755 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.759774 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.760146 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.760156 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.760162 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.760169 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.760174 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.760181 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.764466 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.764480 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.764491 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.765554 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.765564 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.765570 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.765580 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.765592 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.765600 659 log.go:172] (0xc000704500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32315/\nI0510 23:58:37.769822 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.769840 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.769858 659 log.go:172] (0xc0007405a0) (3) Data frame sent\nI0510 23:58:37.770773 659 log.go:172] (0xc000a753f0) Data frame received for 5\nI0510 23:58:37.770810 659 log.go:172] (0xc000704500) (5) Data frame handling\nI0510 23:58:37.770933 659 log.go:172] (0xc000a753f0) Data frame received for 3\nI0510 23:58:37.770952 659 log.go:172] (0xc0007405a0) (3) Data frame handling\nI0510 23:58:37.772251 659 log.go:172] (0xc000a753f0) Data frame received for 1\nI0510 23:58:37.772264 659 log.go:172] (0xc000c3e1e0) (1) Data frame handling\nI0510 23:58:37.772279 659 log.go:172] (0xc000c3e1e0) (1) Data frame sent\nI0510 23:58:37.772322 659 log.go:172] (0xc000a753f0) (0xc000c3e1e0) Stream removed, broadcasting: 1\nI0510 23:58:37.772344 659 log.go:172] (0xc000a753f0) Go away received\nI0510 23:58:37.772722 659 log.go:172] (0xc000a753f0) (0xc000c3e1e0) Stream removed, broadcasting: 1\nI0510 23:58:37.772751 659 log.go:172] (0xc000a753f0) (0xc0007405a0) Stream removed, broadcasting: 3\nI0510 23:58:37.772764 659 log.go:172] (0xc000a753f0) (0xc000704500) Stream removed, broadcasting: 5\n" May 10 23:58:37.777: INFO: stdout: "\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv\naffinity-nodeport-478tv" May 10 
23:58:37.777: INFO: Received response from host: May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Received response from host: affinity-nodeport-478tv May 10 23:58:37.777: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-2882, will wait for the garbage collector to delete the pods May 10 23:58:37.997: INFO: Deleting ReplicationController affinity-nodeport took: 100.83119ms May 10 23:58:38.297: INFO: Terminating ReplicationController affinity-nodeport pods took: 300.256357ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:58:45.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2882" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.787 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":45,"skipped":723,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:58:45.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:01.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1307" for this suite. • [SLOW TEST:16.134 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":46,"skipped":739,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:01.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 10 23:59:01.331: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:02.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9228" for this suite. 
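The create/delete cycle above completes in about a second because registering a CRD only writes an API object; no workload is scheduled. A minimal apiextensions.k8s.io/v1 definition with hypothetical names:
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
    listKind: NoxuList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete crd noxus.example.com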
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":47,"skipped":760,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:02.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 10 23:59:03.133: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 10 23:59:05.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751943, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751943, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724751943, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 23:59:08.574: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 10 23:59:08.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:09.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3151" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.471 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":48,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:09.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4689 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4689 I0510 23:59:09.997732 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4689, replica count: 2 I0510 23:59:13.048161 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 23:59:16.048399 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 23:59:16.048: INFO: Creating new exec pod May 10 23:59:21.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4689 execpodclcqq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 10 23:59:21.314: INFO: stderr: "I0510 23:59:21.201968 683 log.go:172] (0xc0009cd6b0) (0xc000afa320) Create stream\nI0510 23:59:21.202027 683 log.go:172] (0xc0009cd6b0) (0xc000afa320) Stream added, broadcasting: 1\nI0510 23:59:21.207200 683 log.go:172] (0xc0009cd6b0) Reply frame received for 1\nI0510 23:59:21.207254 683 log.go:172] (0xc0009cd6b0) (0xc00052cd20) Create stream\nI0510 23:59:21.207268 683 log.go:172] (0xc0009cd6b0) (0xc00052cd20) Stream added, broadcasting: 3\nI0510 23:59:21.208395 683 log.go:172] (0xc0009cd6b0) Reply frame received for 3\nI0510 23:59:21.208420 683 log.go:172] (0xc0009cd6b0) (0xc00052a460) Create stream\nI0510 23:59:21.208435 683 log.go:172] (0xc0009cd6b0) (0xc00052a460) Stream added, broadcasting: 5\nI0510 23:59:21.209889 683 log.go:172] (0xc0009cd6b0) 
Reply frame received for 5\nI0510 23:59:21.306419 683 log.go:172] (0xc0009cd6b0) Data frame received for 5\nI0510 23:59:21.306467 683 log.go:172] (0xc00052a460) (5) Data frame handling\nI0510 23:59:21.306516 683 log.go:172] (0xc00052a460) (5) Data frame sent\nI0510 23:59:21.306580 683 log.go:172] (0xc0009cd6b0) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI0510 23:59:21.306616 683 log.go:172] (0xc00052a460) (5) Data frame handling\nI0510 23:59:21.306655 683 log.go:172] (0xc00052a460) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0510 23:59:21.306848 683 log.go:172] (0xc0009cd6b0) Data frame received for 5\nI0510 23:59:21.306864 683 log.go:172] (0xc00052a460) (5) Data frame handling\nI0510 23:59:21.307549 683 log.go:172] (0xc0009cd6b0) Data frame received for 3\nI0510 23:59:21.307586 683 log.go:172] (0xc00052cd20) (3) Data frame handling\nI0510 23:59:21.309740 683 log.go:172] (0xc0009cd6b0) Data frame received for 1\nI0510 23:59:21.309758 683 log.go:172] (0xc000afa320) (1) Data frame handling\nI0510 23:59:21.309767 683 log.go:172] (0xc000afa320) (1) Data frame sent\nI0510 23:59:21.309780 683 log.go:172] (0xc0009cd6b0) (0xc000afa320) Stream removed, broadcasting: 1\nI0510 23:59:21.309794 683 log.go:172] (0xc0009cd6b0) Go away received\nI0510 23:59:21.310137 683 log.go:172] (0xc0009cd6b0) (0xc000afa320) Stream removed, broadcasting: 1\nI0510 23:59:21.310160 683 log.go:172] (0xc0009cd6b0) (0xc00052cd20) Stream removed, broadcasting: 3\nI0510 23:59:21.310175 683 log.go:172] (0xc0009cd6b0) (0xc00052a460) Stream removed, broadcasting: 5\n" May 10 23:59:21.314: INFO: stdout: "" May 10 23:59:21.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4689 execpodclcqq -- /bin/sh -x -c nc -zv -t -w 2 10.106.134.128 80' May 10 23:59:21.543: INFO: stderr: "I0510 23:59:21.459505 704 log.go:172] (0xc0009ec000) (0xc00080eaa0) Create stream\nI0510 23:59:21.459562 704 log.go:172] (0xc0009ec000) (0xc00080eaa0) Stream added, broadcasting: 1\nI0510 23:59:21.461075 704 log.go:172] (0xc0009ec000) Reply frame received for 1\nI0510 23:59:21.461227 704 log.go:172] (0xc0009ec000) (0xc000780280) Create stream\nI0510 23:59:21.461240 704 log.go:172] (0xc0009ec000) (0xc000780280) Stream added, broadcasting: 3\nI0510 23:59:21.462131 704 log.go:172] (0xc0009ec000) Reply frame received for 3\nI0510 23:59:21.462162 704 log.go:172] (0xc0009ec000) (0xc000781220) Create stream\nI0510 23:59:21.462171 704 log.go:172] (0xc0009ec000) (0xc000781220) Stream added, broadcasting: 5\nI0510 23:59:21.462903 704 log.go:172] (0xc0009ec000) Reply frame received for 5\nI0510 23:59:21.537428 704 log.go:172] (0xc0009ec000) Data frame received for 3\nI0510 23:59:21.537491 704 log.go:172] (0xc000780280) (3) Data frame handling\nI0510 23:59:21.537546 704 log.go:172] (0xc0009ec000) Data frame received for 5\nI0510 23:59:21.537568 704 log.go:172] (0xc000781220) (5) Data frame handling\nI0510 23:59:21.537589 704 log.go:172] (0xc000781220) (5) Data frame sent\nI0510 23:59:21.537607 704 log.go:172] (0xc0009ec000) Data frame received for 5\nI0510 23:59:21.537628 704 log.go:172] (0xc000781220) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.134.128 80\nConnection to 10.106.134.128 80 port [tcp/http] succeeded!\nI0510 23:59:21.538986 704 log.go:172] (0xc0009ec000) Data frame received for 1\nI0510 23:59:21.539006 704 log.go:172] (0xc00080eaa0) (1) Data frame handling\nI0510 23:59:21.539015 704 log.go:172] 
(0xc00080eaa0) (1) Data frame sent\nI0510 23:59:21.539032 704 log.go:172] (0xc0009ec000) (0xc00080eaa0) Stream removed, broadcasting: 1\nI0510 23:59:21.539094 704 log.go:172] (0xc0009ec000) Go away received\nI0510 23:59:21.539354 704 log.go:172] (0xc0009ec000) (0xc00080eaa0) Stream removed, broadcasting: 1\nI0510 23:59:21.539373 704 log.go:172] (0xc0009ec000) (0xc000780280) Stream removed, broadcasting: 3\nI0510 23:59:21.539380 704 log.go:172] (0xc0009ec000) (0xc000781220) Stream removed, broadcasting: 5\n" May 10 23:59:21.543: INFO: stdout: "" May 10 23:59:21.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4689 execpodclcqq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30882' May 10 23:59:21.755: INFO: stderr: "I0510 23:59:21.685575 724 log.go:172] (0xc000a72000) (0xc000362d20) Create stream\nI0510 23:59:21.685648 724 log.go:172] (0xc000a72000) (0xc000362d20) Stream added, broadcasting: 1\nI0510 23:59:21.688618 724 log.go:172] (0xc000a72000) Reply frame received for 1\nI0510 23:59:21.688652 724 log.go:172] (0xc000a72000) (0xc0000efb80) Create stream\nI0510 23:59:21.688662 724 log.go:172] (0xc000a72000) (0xc0000efb80) Stream added, broadcasting: 3\nI0510 23:59:21.689937 724 log.go:172] (0xc000a72000) Reply frame received for 3\nI0510 23:59:21.689985 724 log.go:172] (0xc000a72000) (0xc000149ea0) Create stream\nI0510 23:59:21.690008 724 log.go:172] (0xc000a72000) (0xc000149ea0) Stream added, broadcasting: 5\nI0510 23:59:21.690970 724 log.go:172] (0xc000a72000) Reply frame received for 5\nI0510 23:59:21.749623 724 log.go:172] (0xc000a72000) Data frame received for 5\nI0510 23:59:21.749660 724 log.go:172] (0xc000149ea0) (5) Data frame handling\nI0510 23:59:21.749672 724 log.go:172] (0xc000149ea0) (5) Data frame sent\nI0510 23:59:21.749681 724 log.go:172] (0xc000a72000) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 30882\nConnection to 172.17.0.13 30882 port [tcp/30882] succeeded!\nI0510 23:59:21.749688 724 log.go:172] (0xc000149ea0) (5) Data frame handling\nI0510 23:59:21.749723 724 log.go:172] (0xc000a72000) Data frame received for 3\nI0510 23:59:21.749738 724 log.go:172] (0xc0000efb80) (3) Data frame handling\nI0510 23:59:21.751013 724 log.go:172] (0xc000a72000) Data frame received for 1\nI0510 23:59:21.751037 724 log.go:172] (0xc000362d20) (1) Data frame handling\nI0510 23:59:21.751051 724 log.go:172] (0xc000362d20) (1) Data frame sent\nI0510 23:59:21.751062 724 log.go:172] (0xc000a72000) (0xc000362d20) Stream removed, broadcasting: 1\nI0510 23:59:21.751199 724 log.go:172] (0xc000a72000) Go away received\nI0510 23:59:21.751292 724 log.go:172] (0xc000a72000) (0xc000362d20) Stream removed, broadcasting: 1\nI0510 23:59:21.751305 724 log.go:172] (0xc000a72000) (0xc0000efb80) Stream removed, broadcasting: 3\nI0510 23:59:21.751311 724 log.go:172] (0xc000a72000) (0xc000149ea0) Stream removed, broadcasting: 5\n" May 10 23:59:21.755: INFO: stdout: "" May 10 23:59:21.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4689 execpodclcqq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30882' May 10 23:59:21.965: INFO: stderr: "I0510 23:59:21.884947 744 log.go:172] (0xc00099b1e0) (0xc000a98320) Create stream\nI0510 23:59:21.885008 744 log.go:172] (0xc00099b1e0) (0xc000a98320) Stream added, broadcasting: 1\nI0510 23:59:21.889874 744 log.go:172] (0xc00099b1e0) Reply frame received for 1\nI0510 23:59:21.889909 744 log.go:172] 
(0xc00099b1e0) (0xc000270640) Create stream\nI0510 23:59:21.889921 744 log.go:172] (0xc00099b1e0) (0xc000270640) Stream added, broadcasting: 3\nI0510 23:59:21.890859 744 log.go:172] (0xc00099b1e0) Reply frame received for 3\nI0510 23:59:21.890904 744 log.go:172] (0xc00099b1e0) (0xc00050e5a0) Create stream\nI0510 23:59:21.890920 744 log.go:172] (0xc00099b1e0) (0xc00050e5a0) Stream added, broadcasting: 5\nI0510 23:59:21.891801 744 log.go:172] (0xc00099b1e0) Reply frame received for 5\nI0510 23:59:21.957398 744 log.go:172] (0xc00099b1e0) Data frame received for 3\nI0510 23:59:21.957429 744 log.go:172] (0xc000270640) (3) Data frame handling\nI0510 23:59:21.957537 744 log.go:172] (0xc00099b1e0) Data frame received for 5\nI0510 23:59:21.957551 744 log.go:172] (0xc00050e5a0) (5) Data frame handling\nI0510 23:59:21.957560 744 log.go:172] (0xc00050e5a0) (5) Data frame sent\nI0510 23:59:21.957568 744 log.go:172] (0xc00099b1e0) Data frame received for 5\nI0510 23:59:21.957574 744 log.go:172] (0xc00050e5a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30882\nConnection to 172.17.0.12 30882 port [tcp/30882] succeeded!\nI0510 23:59:21.959352 744 log.go:172] (0xc00099b1e0) Data frame received for 1\nI0510 23:59:21.959378 744 log.go:172] (0xc000a98320) (1) Data frame handling\nI0510 23:59:21.959394 744 log.go:172] (0xc000a98320) (1) Data frame sent\nI0510 23:59:21.959406 744 log.go:172] (0xc00099b1e0) (0xc000a98320) Stream removed, broadcasting: 1\nI0510 23:59:21.959771 744 log.go:172] (0xc00099b1e0) (0xc000a98320) Stream removed, broadcasting: 1\nI0510 23:59:21.959788 744 log.go:172] (0xc00099b1e0) (0xc000270640) Stream removed, broadcasting: 3\nI0510 23:59:21.959972 744 log.go:172] (0xc00099b1e0) (0xc00050e5a0) Stream removed, broadcasting: 5\n" May 10 23:59:21.965: INFO: stdout: "" May 10 23:59:21.965: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:22.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4689" for this suite. 
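The type change exercised above can be reproduced with a short client-go sketch along these lines; the namespace, selector, and external name are illustrative assumptions, not values from this run.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, ns := context.TODO(), "default"
	svc, err := cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Flip the type: ExternalName must be cleared and a selector and ports
	// added, after which the apiserver allocates a NodePort for each port.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80), Protocol: corev1.ProtocolTCP}}
	svc, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("allocated NodePort: %d", svc.Spec.Ports[0].NodePort)
}

The three nc -zv probes recorded above then exercise the result by service name, by cluster IP (10.106.134.128), and on the allocated node port (30882) of both node addresses.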
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.240 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":49,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:22.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 10 23:59:22.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 10 23:59:22.230: INFO: stderr: "" May 10 23:59:22.230: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:22.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8572" for this suite. 
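A minimal stand-alone version of the cluster-info check that follows, assuming kubectl is on PATH and reusing the kubeconfig from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl cluster-info failed: %v\n%s", err, out)
	}
	// The stdout captured below is ANSI-colored, but the phrase itself is contiguous.
	if !strings.Contains(string(out), "Kubernetes master") {
		log.Fatalf("master service missing from cluster-info output:\n%s", out)
	}
	log.Println("cluster-info lists the Kubernetes master service")
}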
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":50,"skipped":817,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:22.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e66e0f4b-c4c6-498b-9599-6c706a40cfcd STEP: Creating a pod to test consume configMaps May 10 23:59:22.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376" in namespace "projected-6613" to be "Succeeded or Failed" May 10 23:59:22.335: INFO: Pod "pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603623ms May 10 23:59:24.340: INFO: Pod "pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007093494s May 10 23:59:26.344: INFO: Pod "pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011490574s STEP: Saw pod success May 10 23:59:26.344: INFO: Pod "pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376" satisfied condition "Succeeded or Failed" May 10 23:59:26.347: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376 container projected-configmap-volume-test: STEP: delete the pod May 10 23:59:26.392: INFO: Waiting for pod pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376 to disappear May 10 23:59:26.402: INFO: Pod pod-projected-configmaps-dcffe601-0724-49af-b709-cc006181c376 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:26.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6613" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":825,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:26.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 10 23:59:26.457: INFO: Waiting up to 5m0s for pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667" in namespace "emptydir-360" to be "Succeeded or Failed" May 10 23:59:26.490: INFO: Pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667": Phase="Pending", Reason="", readiness=false. Elapsed: 33.05709ms May 10 23:59:28.556: INFO: Pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099401629s May 10 23:59:30.560: INFO: Pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667": Phase="Running", Reason="", readiness=true. Elapsed: 4.103731497s May 10 23:59:32.565: INFO: Pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108335836s STEP: Saw pod success May 10 23:59:32.565: INFO: Pod "pod-e8775a70-c484-4f5b-99c1-107aeb6eb667" satisfied condition "Succeeded or Failed" May 10 23:59:32.568: INFO: Trying to get logs from node latest-worker2 pod pod-e8775a70-c484-4f5b-99c1-107aeb6eb667 container test-container: STEP: delete the pod May 10 23:59:32.604: INFO: Waiting for pod pod-e8775a70-c484-4f5b-99c1-107aeb6eb667 to disappear May 10 23:59:32.616: INFO: Pod pod-e8775a70-c484-4f5b-99c1-107aeb6eb667 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 10 23:59:32.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-360" for this suite. 
• [SLOW TEST:6.214 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":833,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 10 23:59:32.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:00:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7625" for this suite. 
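Each "should get the expected ..." step in the blackbox test above reads one field of the pod's status. A sketch of fetching those fields directly; the namespace is an illustrative assumption, the pod name mirrors the container names in the log.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		log.Fatal("no container statuses reported yet")
	}
	st := pod.Status.ContainerStatuses[0]
	fmt.Println("Phase:       ", pod.Status.Phase) // e.g. Running, Succeeded, Failed
	fmt.Println("RestartCount:", st.RestartCount)  // driven by the restart policy and exit codes
	fmt.Println("Ready:       ", st.Ready)
	// State distinguishes Waiting, Running, and Terminated.
	if st.State.Terminated != nil {
		fmt.Println("ExitCode:    ", st.State.Terminated.ExitCode)
	}
}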
• [SLOW TEST:32.990 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":845,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:00:05.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:00:05.662: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 00:00:08.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7151 create -f -' May 11 00:00:14.377: INFO: stderr: "" May 11 00:00:14.378: INFO: stdout: "e2e-test-crd-publish-openapi-4766-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 00:00:14.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7151 delete e2e-test-crd-publish-openapi-4766-crds test-cr' May 11 00:00:14.489: INFO: stderr: "" May 11 00:00:14.489: INFO: stdout: "e2e-test-crd-publish-openapi-4766-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 11 00:00:14.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7151 apply -f -' May 11 00:00:14.784: INFO: stderr: "" May 11 00:00:14.784: INFO: stdout: "e2e-test-crd-publish-openapi-4766-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 00:00:14.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7151 delete e2e-test-crd-publish-openapi-4766-crds test-cr' May 11 00:00:14.913: INFO: stderr: "" May 11 00:00:14.913: INFO: stdout: "e2e-test-crd-publish-openapi-4766-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 11 00:00:14.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4766-crds' 
May 11 00:00:15.164: INFO: stderr: "" May 11 00:00:15.164: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4766-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:00:18.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7151" for this suite. • [SLOW TEST:12.503 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":54,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:00:18.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 11 00:00:19.130: INFO: Pod name wrapped-volume-race-28a696c4-9fcd-48b2-9e00-ffc3549fe8f5: Found 0 pods out of 5 May 11 00:00:24.148: INFO: Pod name wrapped-volume-race-28a696c4-9fcd-48b2-9e00-ffc3549fe8f5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-28a696c4-9fcd-48b2-9e00-ffc3549fe8f5 in namespace emptydir-wrapper-8373, will wait for the garbage collector to delete the pods May 11 00:00:36.579: INFO: Deleting ReplicationController wrapped-volume-race-28a696c4-9fcd-48b2-9e00-ffc3549fe8f5 took: 7.804751ms May 11 00:00:36.980: INFO: Terminating ReplicationController wrapped-volume-race-28a696c4-9fcd-48b2-9e00-ffc3549fe8f5 pods took: 400.343777ms STEP: Creating RC which spawns configmap-volume pods May 11 00:00:45.512: INFO: Pod name wrapped-volume-race-a4d8e1f4-ffad-4cd2-b619-ca02e22dac14: Found 0 pods out of 5 May 11 00:00:50.521: INFO: Pod name wrapped-volume-race-a4d8e1f4-ffad-4cd2-b619-ca02e22dac14: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a4d8e1f4-ffad-4cd2-b619-ca02e22dac14 in namespace emptydir-wrapper-8373, will wait for the garbage collector to delete the pods May 11 00:01:04.637: INFO: Deleting ReplicationController wrapped-volume-race-a4d8e1f4-ffad-4cd2-b619-ca02e22dac14 took: 15.92036ms May 11 00:01:04.937: INFO: Terminating ReplicationController wrapped-volume-race-a4d8e1f4-ffad-4cd2-b619-ca02e22dac14 
pods took: 300.216269ms STEP: Creating RC which spawns configmap-volume pods May 11 00:01:15.029: INFO: Pod name wrapped-volume-race-fbee2427-ccd7-432a-b11e-74c54e61f1c5: Found 0 pods out of 5 May 11 00:01:20.038: INFO: Pod name wrapped-volume-race-fbee2427-ccd7-432a-b11e-74c54e61f1c5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fbee2427-ccd7-432a-b11e-74c54e61f1c5 in namespace emptydir-wrapper-8373, will wait for the garbage collector to delete the pods May 11 00:01:34.156: INFO: Deleting ReplicationController wrapped-volume-race-fbee2427-ccd7-432a-b11e-74c54e61f1c5 took: 7.929712ms May 11 00:01:34.556: INFO: Terminating ReplicationController wrapped-volume-race-fbee2427-ccd7-432a-b11e-74c54e61f1c5 pods took: 400.350503ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:01:46.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8373" for this suite. • [SLOW TEST:88.303 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":55,"skipped":914,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:01:46.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5905 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5905 STEP: creating replication controller externalsvc in namespace services-5905 I0511 00:01:46.749462 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5905, replica count: 2 I0511 00:01:49.799928 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:01:52.800192 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 11 00:01:52.893: INFO: Creating new exec pod May 11 00:01:56.934: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5905 execpod648dv -- /bin/sh -x -c nslookup nodeport-service' May 11 00:01:57.165: INFO: stderr: "I0511 00:01:57.070011 897 log.go:172] (0xc00003a9a0) (0xc0003668c0) Create stream\nI0511 00:01:57.070058 897 log.go:172] (0xc00003a9a0) (0xc0003668c0) Stream added, broadcasting: 1\nI0511 00:01:57.071673 897 log.go:172] (0xc00003a9a0) Reply frame received for 1\nI0511 00:01:57.071699 897 log.go:172] (0xc00003a9a0) (0xc000604a00) Create stream\nI0511 00:01:57.071707 897 log.go:172] (0xc00003a9a0) (0xc000604a00) Stream added, broadcasting: 3\nI0511 00:01:57.072329 897 log.go:172] (0xc00003a9a0) Reply frame received for 3\nI0511 00:01:57.072364 897 log.go:172] (0xc00003a9a0) (0xc000367040) Create stream\nI0511 00:01:57.072380 897 log.go:172] (0xc00003a9a0) (0xc000367040) Stream added, broadcasting: 5\nI0511 00:01:57.073067 897 log.go:172] (0xc00003a9a0) Reply frame received for 5\nI0511 00:01:57.145602 897 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0511 00:01:57.145623 897 log.go:172] (0xc000367040) (5) Data frame handling\nI0511 00:01:57.145633 897 log.go:172] (0xc000367040) (5) Data frame sent\n+ nslookup nodeport-service\nI0511 00:01:57.155943 897 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0511 00:01:57.155974 897 log.go:172] (0xc000604a00) (3) Data frame handling\nI0511 00:01:57.156022 897 log.go:172] (0xc000604a00) (3) Data frame sent\nI0511 00:01:57.157379 897 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0511 00:01:57.157421 897 log.go:172] (0xc000604a00) (3) Data frame handling\nI0511 00:01:57.157456 897 log.go:172] (0xc000604a00) (3) Data frame sent\nI0511 00:01:57.157933 897 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0511 00:01:57.157987 897 log.go:172] (0xc000367040) (5) Data frame handling\nI0511 00:01:57.158155 897 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0511 00:01:57.158178 897 log.go:172] (0xc000604a00) (3) Data frame handling\nI0511 00:01:57.160015 897 log.go:172] (0xc00003a9a0) Data frame received for 1\nI0511 00:01:57.160035 897 log.go:172] (0xc0003668c0) (1) Data frame handling\nI0511 00:01:57.160073 897 log.go:172] (0xc0003668c0) (1) Data frame sent\nI0511 00:01:57.160096 897 log.go:172] (0xc00003a9a0) (0xc0003668c0) Stream removed, broadcasting: 1\nI0511 00:01:57.160122 897 log.go:172] (0xc00003a9a0) Go away received\nI0511 00:01:57.160507 897 log.go:172] (0xc00003a9a0) (0xc0003668c0) Stream removed, broadcasting: 1\nI0511 00:01:57.160535 897 log.go:172] (0xc00003a9a0) (0xc000604a00) Stream removed, broadcasting: 3\nI0511 00:01:57.160553 897 log.go:172] (0xc00003a9a0) (0xc000367040) Stream removed, broadcasting: 5\n" May 11 00:01:57.165: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5905.svc.cluster.local\tcanonical name = externalsvc.services-5905.svc.cluster.local.\nName:\texternalsvc.services-5905.svc.cluster.local\nAddress: 10.96.53.150\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5905, will wait for the garbage collector to delete the pods May 11 00:01:57.226: INFO: Deleting ReplicationController externalsvc took: 6.877962ms May 11 00:01:57.526: INFO: Terminating ReplicationController externalsvc pods took: 300.22997ms May 11 00:02:04.982: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:05.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5905" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:18.621 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":56,"skipped":917,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:05.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:02:05.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 11 00:02:05.232: INFO: stderr: "" May 11 00:02:05.232: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:05.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5768" for this suite. 
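A stand-alone approximation of the kubectl version check above, asserting the same fields visible in the captured stdout (kubectl on PATH is assumed):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl version failed: %v\n%s", err, out)
	}
	for _, want := range []string{"Client Version", "Server Version", "GitVersion", "GitCommit", "GoVersion", "Platform"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("%q missing from kubectl version output:\n%s", want, out)
		}
	}
	log.Println("kubectl version printed all expected fields")
}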
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":57,"skipped":920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:05.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:02:05.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2" in namespace "downward-api-3081" to be "Succeeded or Failed" May 11 00:02:05.335: INFO: Pod "downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.510878ms May 11 00:02:07.351: INFO: Pod "downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035520215s May 11 00:02:09.355: INFO: Pod "downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039372055s STEP: Saw pod success May 11 00:02:09.355: INFO: Pod "downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2" satisfied condition "Succeeded or Failed" May 11 00:02:09.358: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2 container client-container: STEP: delete the pod May 11 00:02:09.434: INFO: Waiting for pod downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2 to disappear May 11 00:02:09.450: INFO: Pod downwardapi-volume-ff2d7f23-29b1-4f4d-9c03-4a37d599b1e2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:09.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3081" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:09.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:02:09.509: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 00:02:09.586: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 00:02:14.591: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 00:02:14.591: INFO: Creating deployment "test-rolling-update-deployment" May 11 00:02:14.599: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 00:02:14.608: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 00:02:16.615: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 00:02:16.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752134, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:02:18.701: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 00:02:18.723: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5640 /apis/apps/v1/namespaces/deployment-5640/deployments/test-rolling-update-deployment ed2c914b-4854-45be-91e4-f594321235b4 3209498 1 2020-05-11 00:02:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-11 00:02:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 00:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002742488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 00:02:14 +0000 UTC,LastTransitionTime:2020-05-11 00:02:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-11 00:02:18 +0000 UTC,LastTransitionTime:2020-05-11 00:02:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 00:02:18.726: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-5640 /apis/apps/v1/namespaces/deployment-5640/replicasets/test-rolling-update-deployment-df7bb669b e0e721cf-9400-4335-973a-a86159fca96c 3209487 1 2020-05-11 00:02:14 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ed2c914b-4854-45be-91e4-f594321235b4 0xc0027429c0 0xc0027429c1}] [] [{kube-controller-manager Update apps/v1 2020-05-11 00:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed2c914b-4854-45be-91e4-f594321235b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002742a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 00:02:18.726: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 00:02:18.726: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5640 /apis/apps/v1/namespaces/deployment-5640/replicasets/test-rolling-update-controller b2931c18-1bab-490d-9351-bc8992f36541 3209496 2 2020-05-11 00:02:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ed2c914b-4854-45be-91e4-f594321235b4 0xc0027428af 0xc0027428c0}] [] [{e2e.test Update apps/v1 2020-05-11 00:02:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 00:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed2c914b-4854-45be-91e4-f594321235b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002742958 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 00:02:18.729: INFO: Pod "test-rolling-update-deployment-df7bb669b-6z566" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-6z566 test-rolling-update-deployment-df7bb669b- deployment-5640 /api/v1/namespaces/deployment-5640/pods/test-rolling-update-deployment-df7bb669b-6z566 367dcd47-4b58-43c4-ad83-edc9c3c94947 3209486 0 2020-05-11 00:02:14 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b e0e721cf-9400-4335-973a-a86159fca96c 0xc005ed05e0 0xc005ed05e1}] [] [{kube-controller-manager Update v1 2020-05-11 00:02:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0e721cf-9400-4335-973a-a86159fca96c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 00:02:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sxcnv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sxcnv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sxcnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:02:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-11 00:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:02:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.39,StartTime:2020-05-11 00:02:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 00:02:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://dbb7ec28500a7eb5ba12a7b3d07d7676cac1e7f33e90bb3ff378fd9b486f4f4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:18.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5640" for this suite. • [SLOW TEST:9.277 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":59,"skipped":982,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:18.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:02:22.888: INFO: Waiting up to 5m0s for pod "client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3" in namespace "pods-380" to be "Succeeded or Failed" May 11 00:02:22.910: INFO: Pod "client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.63638ms May 11 00:02:24.937: INFO: Pod "client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048951421s May 11 00:02:26.941: INFO: Pod "client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05322636s STEP: Saw pod success May 11 00:02:26.941: INFO: Pod "client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3" satisfied condition "Succeeded or Failed" May 11 00:02:26.944: INFO: Trying to get logs from node latest-worker pod client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3 container env3cont: STEP: delete the pod May 11 00:02:26.996: INFO: Waiting for pod client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3 to disappear May 11 00:02:27.009: INFO: Pod client-envvars-3bd7ba3f-fcab-45a3-8b87-610e943977f3 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:27.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-380" for this suite. • [SLOW TEST:8.279 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":1029,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:27.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 00:02:27.088: INFO: Waiting up to 5m0s for pod "pod-a6876852-cd3e-4300-ac2b-2dd1ba559763" in namespace "emptydir-651" to be "Succeeded or Failed" May 11 00:02:27.093: INFO: Pod "pod-a6876852-cd3e-4300-ac2b-2dd1ba559763": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678137ms May 11 00:02:29.097: INFO: Pod "pod-a6876852-cd3e-4300-ac2b-2dd1ba559763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009411676s May 11 00:02:31.120: INFO: Pod "pod-a6876852-cd3e-4300-ac2b-2dd1ba559763": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032045462s STEP: Saw pod success May 11 00:02:31.120: INFO: Pod "pod-a6876852-cd3e-4300-ac2b-2dd1ba559763" satisfied condition "Succeeded or Failed" May 11 00:02:31.123: INFO: Trying to get logs from node latest-worker2 pod pod-a6876852-cd3e-4300-ac2b-2dd1ba559763 container test-container: STEP: delete the pod May 11 00:02:31.158: INFO: Waiting for pod pod-a6876852-cd3e-4300-ac2b-2dd1ba559763 to disappear May 11 00:02:31.169: INFO: Pod pod-a6876852-cd3e-4300-ac2b-2dd1ba559763 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:31.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-651" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1029,"failed":0} ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:31.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:31.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-233" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":62,"skipped":1029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:31.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-aae054ec-7cbd-4d31-ae16-44a68727dabc STEP: Creating a pod to test consume configMaps May 11 00:02:31.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622" in namespace "configmap-9222" to be "Succeeded or Failed" May 11 00:02:31.708: INFO: Pod "pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.937097ms May 11 00:02:33.856: INFO: Pod "pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151018111s May 11 00:02:35.860: INFO: Pod "pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154939627s STEP: Saw pod success May 11 00:02:35.860: INFO: Pod "pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622" satisfied condition "Succeeded or Failed" May 11 00:02:35.864: INFO: Trying to get logs from node latest-worker pod pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622 container configmap-volume-test: STEP: delete the pod May 11 00:02:35.923: INFO: Waiting for pod pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622 to disappear May 11 00:02:35.936: INFO: Pod pod-configmaps-983f4f80-1c51-42a5-a347-de02b5d70622 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:35.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9222" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":1064,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:35.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 11 00:02:40.640: INFO: Successfully updated pod "annotationupdatea41eb284-25ec-44ec-ac42-ae69344ed441" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:44.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4661" for this suite. 
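[annotation] The annotation-update test that just finished ("Successfully updated pod annotationupdate...") works because downward-API items served through a projected volume are refreshed by the kubelet when pod metadata changes. A minimal sketch of that kind of pod spec, with illustrative names rather than the suite's actual fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod exposes the pod's own annotations as a file through a
// projected downwardAPI volume; editing the annotations later updates the file.
func annotationPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotation-update-demo", // illustrative name
            Annotations: map[string]string{"build": "one"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "client",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "annotations",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}

func main() { fmt.Printf("%+v\n", annotationPod()) }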
• [SLOW TEST:8.748 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":1064,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:44.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:02:44.805: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a" in namespace "security-context-test-6008" to be "Succeeded or Failed" May 11 00:02:44.811: INFO: Pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101694ms May 11 00:02:46.815: INFO: Pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010147474s May 11 00:02:48.818: INFO: Pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013664489s May 11 00:02:48.818: INFO: Pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a" satisfied condition "Succeeded or Failed" May 11 00:02:48.824: INFO: Got logs for pod "busybox-privileged-false-1560bf2c-6ab6-48de-b672-e9cf8d01e42a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:48.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6008" for this suite. 
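[annotation] The "ip: RTNETLINK answers: Operation not permitted" log line above is the expected result: with privileged set explicitly to false, the container lacks CAP_NET_ADMIN, so network-configuration syscalls are refused. A sketch of the shape of such a pod, assuming illustrative names and command (not the suite's exact fixture):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// unprivilegedPod runs a network-admin command that an unprivileged
// container is expected to be refused.
func unprivilegedPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "busybox",
                Image: "busybox",
                // Any ip(8) operation needing CAP_NET_ADMIN; expected to print
                // "ip: RTNETLINK answers: Operation not permitted".
                Command:         []string{"sh", "-c", "ip link add dummy0 type dummy"},
                SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
            }},
        },
    }
}

func main() { fmt.Printf("%+v\n", unprivilegedPod()) }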
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:48.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 11 00:02:48.921: INFO: Created pod &Pod{ObjectMeta:{dns-4619 dns-4619 /api/v1/namespaces/dns-4619/pods/dns-4619 3ff29db1-edd5-4b82-a7f1-79de11c755ed 3209758 0 2020-05-11 00:02:48 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-11 00:02:48 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5jqr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5jqr8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5jqr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostna
me:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 00:02:48.924: INFO: The status of Pod dns-4619 is Pending, waiting for it to be Running (with Ready = true) May 11 00:02:50.952: INFO: The status of Pod dns-4619 is Pending, waiting for it to be Running (with Ready = true) May 11 00:02:52.928: INFO: The status of Pod dns-4619 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 11 00:02:52.928: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4619 PodName:dns-4619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:02:52.928: INFO: >>> kubeConfig: /root/.kube/config I0511 00:02:52.977923 7 log.go:172] (0xc002c84a50) (0xc00221ff40) Create stream I0511 00:02:52.977977 7 log.go:172] (0xc002c84a50) (0xc00221ff40) Stream added, broadcasting: 1 I0511 00:02:52.979748 7 log.go:172] (0xc002c84a50) Reply frame received for 1 I0511 00:02:52.979796 7 log.go:172] (0xc002c84a50) (0xc002c320a0) Create stream I0511 00:02:52.979813 7 log.go:172] (0xc002c84a50) (0xc002c320a0) Stream added, broadcasting: 3 I0511 00:02:52.980608 7 log.go:172] (0xc002c84a50) Reply frame received for 3 I0511 00:02:52.980631 7 log.go:172] (0xc002c84a50) (0xc002c32140) Create stream I0511 00:02:52.980641 7 log.go:172] (0xc002c84a50) (0xc002c32140) Stream added, broadcasting: 5 I0511 00:02:52.981696 7 log.go:172] (0xc002c84a50) Reply frame received for 5 I0511 00:02:53.083657 7 log.go:172] (0xc002c84a50) Data frame received for 3 I0511 00:02:53.083689 7 log.go:172] (0xc002c320a0) (3) Data frame handling I0511 00:02:53.083711 7 log.go:172] (0xc002c320a0) (3) Data frame sent I0511 00:02:53.084901 7 log.go:172] (0xc002c84a50) Data frame received for 3 I0511 00:02:53.084924 7 log.go:172] (0xc002c320a0) (3) Data frame handling I0511 00:02:53.084958 7 log.go:172] (0xc002c84a50) Data frame received for 5 I0511 00:02:53.084994 7 log.go:172] (0xc002c32140) (5) Data frame handling I0511 00:02:53.087065 7 log.go:172] (0xc002c84a50) Data frame received for 1 I0511 00:02:53.087094 7 log.go:172] (0xc00221ff40) (1) Data frame handling I0511 00:02:53.087119 7 log.go:172] (0xc00221ff40) (1) Data frame sent I0511 00:02:53.087137 7 log.go:172] (0xc002c84a50) (0xc00221ff40) Stream removed, broadcasting: 1 I0511 00:02:53.087156 7 log.go:172] (0xc002c84a50) Go away received I0511 00:02:53.087483 7 log.go:172] (0xc002c84a50) (0xc00221ff40) Stream removed, broadcasting: 1 I0511 00:02:53.087496 7 log.go:172] (0xc002c84a50) 
(0xc002c320a0) Stream removed, broadcasting: 3 I0511 00:02:53.087502 7 log.go:172] (0xc002c84a50) (0xc002c32140) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 11 00:02:53.087: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4619 PodName:dns-4619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:02:53.087: INFO: >>> kubeConfig: /root/.kube/config I0511 00:02:53.118064 7 log.go:172] (0xc00234a420) (0xc0025e2960) Create stream I0511 00:02:53.118093 7 log.go:172] (0xc00234a420) (0xc0025e2960) Stream added, broadcasting: 1 I0511 00:02:53.123025 7 log.go:172] (0xc00234a420) Reply frame received for 1 I0511 00:02:53.123086 7 log.go:172] (0xc00234a420) (0xc002c321e0) Create stream I0511 00:02:53.123104 7 log.go:172] (0xc00234a420) (0xc002c321e0) Stream added, broadcasting: 3 I0511 00:02:53.125027 7 log.go:172] (0xc00234a420) Reply frame received for 3 I0511 00:02:53.125055 7 log.go:172] (0xc00234a420) (0xc002956fa0) Create stream I0511 00:02:53.125070 7 log.go:172] (0xc00234a420) (0xc002956fa0) Stream added, broadcasting: 5 I0511 00:02:53.127458 7 log.go:172] (0xc00234a420) Reply frame received for 5 I0511 00:02:53.207668 7 log.go:172] (0xc00234a420) Data frame received for 3 I0511 00:02:53.207697 7 log.go:172] (0xc002c321e0) (3) Data frame handling I0511 00:02:53.207715 7 log.go:172] (0xc002c321e0) (3) Data frame sent I0511 00:02:53.208977 7 log.go:172] (0xc00234a420) Data frame received for 3 I0511 00:02:53.209003 7 log.go:172] (0xc002c321e0) (3) Data frame handling I0511 00:02:53.209020 7 log.go:172] (0xc00234a420) Data frame received for 5 I0511 00:02:53.209030 7 log.go:172] (0xc002956fa0) (5) Data frame handling I0511 00:02:53.210844 7 log.go:172] (0xc00234a420) Data frame received for 1 I0511 00:02:53.210867 7 log.go:172] (0xc0025e2960) (1) Data frame handling I0511 00:02:53.210880 7 log.go:172] (0xc0025e2960) (1) Data frame sent I0511 00:02:53.211126 7 log.go:172] (0xc00234a420) (0xc0025e2960) Stream removed, broadcasting: 1 I0511 00:02:53.211180 7 log.go:172] (0xc00234a420) Go away received I0511 00:02:53.211323 7 log.go:172] (0xc00234a420) (0xc0025e2960) Stream removed, broadcasting: 1 I0511 00:02:53.211352 7 log.go:172] (0xc00234a420) (0xc002c321e0) Stream removed, broadcasting: 3 I0511 00:02:53.211364 7 log.go:172] (0xc00234a420) (0xc002956fa0) Stream removed, broadcasting: 5 May 11 00:02:53.211: INFO: Deleting pod dns-4619... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:53.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4619" for this suite. 
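[annotation] The pod dump above shows exactly what this test exercises: DNSPolicy:None with DNSConfig{Nameservers:[1.1.1.1], Searches:[resolv.conf.local]}. With dnsPolicy None the kubelet ignores cluster DNS and writes precisely these values into the container's /etc/resolv.conf, which the agnhost dns-suffix/dns-server-list probes then verify. A sketch of the same spec in Go (pod name illustrative, image and DNS values copied from the log):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod opts out of cluster DNS entirely and supplies its own
// resolver configuration, mirroring the values in the pod dump above.
func customDNSPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"}, // illustrative
        Spec: corev1.PodSpec{
            DNSPolicy: corev1.DNSNone, // kubelet builds resolv.conf from DNSConfig alone
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
                Args:  []string{"pause"},
            }},
        },
    }
}

func main() { fmt.Printf("%+v\n", customDNSPod()) }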
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":66,"skipped":1102,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:53.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7307.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7307.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7307.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7307.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7307.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:02:59.791: INFO: DNS probes using dns-7307/dns-test-a73d7388-5c37-4ac7-8cb2-70414a6c7bea succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:02:59.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7307" for this suite. 
• [SLOW TEST:6.633 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":67,"skipped":1122,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:02:59.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0511 00:03:40.713633 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 00:03:40.713: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:03:40.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6186" for this suite. 
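[annotation] The step "wait for 30 seconds to see if the garbage collector mistakenly deletes the pods" is checking orphan semantics: deleting the replication controller with PropagationPolicy=Orphan removes the RC and clears it from the pods' ownerReferences without cascading the delete. A client-go sketch of that delete call (namespace and RC name are illustrative; the kubeconfig path is the one the suite logs):

package main

import (
    "context"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    // Orphan: remove the RC but leave its pods running, ownerReferences cleared.
    orphan := metav1.DeletePropagationOrphan
    err = cs.CoreV1().ReplicationControllers("gc-6186").Delete(context.TODO(),
        "my-rc", metav1.DeleteOptions{PropagationPolicy: &orphan}) // "my-rc" is illustrative
    if err != nil {
        log.Fatal(err)
    }
}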
• [SLOW TEST:40.768 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":68,"skipped":1137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:03:40.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:04:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-444" for this suite. • [SLOW TEST:20.236 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":69,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:04:00.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:06:01.038: INFO: Deleting pod "var-expansion-385ae1e0-b01c-45df-a95c-7ade8fe6ed19" in namespace "var-expansion-1172" May 11 00:06:01.043: INFO: Wait up to 5m0s for pod "var-expansion-385ae1e0-b01c-45df-a95c-7ade8fe6ed19" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:07.073: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1172" for this suite. • [SLOW TEST:126.123 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":70,"skipped":1223,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:07.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 00:06:07.177: INFO: Waiting up to 5m0s for pod "downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3" in namespace "downward-api-5235" to be "Succeeded or Failed" May 11 00:06:07.192: INFO: Pod "downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.033257ms May 11 00:06:09.196: INFO: Pod "downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018615973s May 11 00:06:11.200: INFO: Pod "downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023018213s STEP: Saw pod success May 11 00:06:11.200: INFO: Pod "downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3" satisfied condition "Succeeded or Failed" May 11 00:06:11.203: INFO: Trying to get logs from node latest-worker pod downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3 container dapi-container: STEP: delete the pod May 11 00:06:11.296: INFO: Waiting for pod downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3 to disappear May 11 00:06:11.307: INFO: Pod downward-api-a5a9e6a1-60f6-4e6b-a8ed-4ca11ac989c3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:11.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5235" for this suite. 
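[annotation] This downward-api test injects the pod's own UID through the downward API as an environment variable rather than a volume file. The same wiring in a container spec looks like this sketch (names illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// uidEnvContainer exposes the pod's own UID as $POD_UID via the downward API.
func uidEnvContainer() corev1.Container {
    return corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
        Env: []corev1.EnvVar{{
            Name: "POD_UID",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
            },
        }},
    }
}

func main() { fmt.Printf("%+v\n", uidEnvContainer()) }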
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1232,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:11.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:26.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7035" for this suite. STEP: Destroying namespace "nsdeletetest-9501" for this suite. May 11 00:06:26.651: INFO: Namespace nsdeletetest-9501 was already deleted STEP: Destroying namespace "nsdeletetest-1325" for this suite. 
• [SLOW TEST:15.341 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":72,"skipped":1241,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:26.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 11 00:06:33.264: INFO: Successfully updated pod "adopt-release-bvfrh" STEP: Checking that the Job readopts the Pod May 11 00:06:33.265: INFO: Waiting up to 15m0s for pod "adopt-release-bvfrh" in namespace "job-2600" to be "adopted" May 11 00:06:33.273: INFO: Pod "adopt-release-bvfrh": Phase="Running", Reason="", readiness=true. Elapsed: 8.018096ms May 11 00:06:35.277: INFO: Pod "adopt-release-bvfrh": Phase="Running", Reason="", readiness=true. Elapsed: 2.012739782s May 11 00:06:35.277: INFO: Pod "adopt-release-bvfrh" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 11 00:06:35.789: INFO: Successfully updated pod "adopt-release-bvfrh" STEP: Checking that the Job releases the Pod May 11 00:06:35.789: INFO: Waiting up to 15m0s for pod "adopt-release-bvfrh" in namespace "job-2600" to be "released" May 11 00:06:35.793: INFO: Pod "adopt-release-bvfrh": Phase="Running", Reason="", readiness=true. Elapsed: 3.823252ms May 11 00:06:37.796: INFO: Pod "adopt-release-bvfrh": Phase="Running", Reason="", readiness=true. Elapsed: 2.006803586s May 11 00:06:37.796: INFO: Pod "adopt-release-bvfrh" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:37.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2600" for this suite. 
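[annotation] Adoption and release in the steps above are driven purely by labels and ownerReferences: a pod that matches the Job's selector but has no controller gets a controller ownerReference added (adopted); removing the matching labels makes the Job controller drop that reference again (released). A sketch that inspects the controllerRef, using the pod and namespace names from the log (read-only; it does not itself trigger adoption):

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    pod, err := cs.CoreV1().Pods("job-2600").Get(context.TODO(), "adopt-release-bvfrh", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    // A controller ownerReference (controller=true) means the pod is "adopted".
    if ref := metav1.GetControllerOf(pod); ref != nil {
        fmt.Printf("controlled by %s/%s\n", ref.Kind, ref.Name)
    } else {
        fmt.Println("orphan: no controller ownerReference")
    }
}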
• [SLOW TEST:11.149 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":73,"skipped":1248,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:37.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 00:06:37.946: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210942 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:06:37.946: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210943 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:06:37.946: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210944 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched 
object when the label value was restored May 11 00:06:48.046: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210996 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:06:48.046: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210997 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:06:48.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5628 /api/v1/namespaces/watch-5628/configmaps/e2e-watch-test-label-changed 8a2451a6-9d8e-4f63-a36c-cd623fcfe670 3210998 0 2020-05-11 00:06:37 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 00:06:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:48.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5628" for this suite. 
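[annotation] The ADDED/MODIFIED/DELETED triples above come from a label-selected watch: moving the object's label out of the selector is reported to the watcher as DELETED, and moving it back as ADDED, even though the configmap existed the whole time. A client-go sketch of the same watch, using the namespace and label from the log:

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    // Watch only configmaps carrying the test's label value.
    w, err := cs.CoreV1().ConfigMaps("watch-5628").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        // Leaving the selector surfaces as DELETED; re-entering as ADDED.
        fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
    }
}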
• [SLOW TEST:10.250 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":74,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:48.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-2322/secret-test-1a7e0e08-6d4f-49fa-a8d6-4d87e225ca57 STEP: Creating a pod to test consume secrets May 11 00:06:48.178: INFO: Waiting up to 5m0s for pod "pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b" in namespace "secrets-2322" to be "Succeeded or Failed" May 11 00:06:48.182: INFO: Pod "pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310835ms May 11 00:06:50.343: INFO: Pod "pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164523975s May 11 00:06:52.347: INFO: Pod "pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168993312s STEP: Saw pod success May 11 00:06:52.348: INFO: Pod "pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b" satisfied condition "Succeeded or Failed" May 11 00:06:52.351: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b container env-test: STEP: delete the pod May 11 00:06:52.404: INFO: Waiting for pod pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b to disappear May 11 00:06:52.409: INFO: Pod pod-configmaps-a19ccf73-6355-484a-b684-e8f2da6db62b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:06:52.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2322" for this suite. 
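[annotation] Consuming a secret "via the environment", as this test does, means a secretKeyRef on an env var; the container then reads the value like any other variable. A sketch of that container spec (secret name and key are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// envFromSecret wires one secret key into an environment variable.
func envFromSecret() corev1.Container {
    return corev1.Container{
        Name:    "env-test",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo $SECRET_DATA"},
        Env: []corev1.EnvVar{{
            Name: "SECRET_DATA",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // illustrative
                    Key:                  "data-1",
                },
            },
        }},
    }
}

func main() { fmt.Printf("%+v\n", envFromSecret()) }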
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1270,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:06:52.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7574 May 11 00:06:56.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 11 00:06:56.761: INFO: stderr: "I0511 00:06:56.642449 933 log.go:172] (0xc000b2ac60) (0xc000204f00) Create stream\nI0511 00:06:56.642504 933 log.go:172] (0xc000b2ac60) (0xc000204f00) Stream added, broadcasting: 1\nI0511 00:06:56.644856 933 log.go:172] (0xc000b2ac60) Reply frame received for 1\nI0511 00:06:56.644882 933 log.go:172] (0xc000b2ac60) (0xc0002054a0) Create stream\nI0511 00:06:56.644890 933 log.go:172] (0xc000b2ac60) (0xc0002054a0) Stream added, broadcasting: 3\nI0511 00:06:56.645924 933 log.go:172] (0xc000b2ac60) Reply frame received for 3\nI0511 00:06:56.645983 933 log.go:172] (0xc000b2ac60) (0xc000205ae0) Create stream\nI0511 00:06:56.646007 933 log.go:172] (0xc000b2ac60) (0xc000205ae0) Stream added, broadcasting: 5\nI0511 00:06:56.646911 933 log.go:172] (0xc000b2ac60) Reply frame received for 5\nI0511 00:06:56.748682 933 log.go:172] (0xc000b2ac60) Data frame received for 5\nI0511 00:06:56.748708 933 log.go:172] (0xc000205ae0) (5) Data frame handling\nI0511 00:06:56.748723 933 log.go:172] (0xc000205ae0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0511 00:06:56.753097 933 log.go:172] (0xc000b2ac60) Data frame received for 3\nI0511 00:06:56.753290 933 log.go:172] (0xc0002054a0) (3) Data frame handling\nI0511 00:06:56.753329 933 log.go:172] (0xc0002054a0) (3) Data frame sent\nI0511 00:06:56.753902 933 log.go:172] (0xc000b2ac60) Data frame received for 3\nI0511 00:06:56.753931 933 log.go:172] (0xc0002054a0) (3) Data frame handling\nI0511 00:06:56.754148 933 log.go:172] (0xc000b2ac60) Data frame received for 5\nI0511 00:06:56.754165 933 log.go:172] (0xc000205ae0) (5) Data frame handling\nI0511 00:06:56.755880 933 log.go:172] (0xc000b2ac60) Data frame received for 1\nI0511 00:06:56.755906 933 log.go:172] (0xc000204f00) (1) Data frame handling\nI0511 00:06:56.755921 933 log.go:172] (0xc000204f00) (1) Data frame sent\nI0511 00:06:56.755955 933 log.go:172] (0xc000b2ac60) (0xc000204f00) Stream removed, broadcasting: 1\nI0511 00:06:56.755991 933 log.go:172] (0xc000b2ac60) Go away received\nI0511 00:06:56.756437 933 
log.go:172] (0xc000b2ac60) (0xc000204f00) Stream removed, broadcasting: 1\nI0511 00:06:56.756456 933 log.go:172] (0xc000b2ac60) (0xc0002054a0) Stream removed, broadcasting: 3\nI0511 00:06:56.756466 933 log.go:172] (0xc000b2ac60) (0xc000205ae0) Stream removed, broadcasting: 5\n" May 11 00:06:56.762: INFO: stdout: "iptables" May 11 00:06:56.762: INFO: proxyMode: iptables May 11 00:06:56.767: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:06:56.816: INFO: Pod kube-proxy-mode-detector still exists May 11 00:06:58.816: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:06:58.820: INFO: Pod kube-proxy-mode-detector still exists May 11 00:07:00.816: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:07:00.820: INFO: Pod kube-proxy-mode-detector still exists May 11 00:07:02.816: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:07:02.821: INFO: Pod kube-proxy-mode-detector still exists May 11 00:07:04.816: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:07:04.820: INFO: Pod kube-proxy-mode-detector still exists May 11 00:07:06.816: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 00:07:06.820: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7574 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7574 I0511 00:07:06.909444 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7574, replica count: 3 I0511 00:07:09.959923 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:07:12.960196 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 00:07:12.966: INFO: Creating new exec pod May 11 00:07:17.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 11 00:07:18.222: INFO: stderr: "I0511 00:07:18.129658 953 log.go:172] (0xc00099d130) (0xc00066bae0) Create stream\nI0511 00:07:18.129755 953 log.go:172] (0xc00099d130) (0xc00066bae0) Stream added, broadcasting: 1\nI0511 00:07:18.134768 953 log.go:172] (0xc00099d130) Reply frame received for 1\nI0511 00:07:18.134808 953 log.go:172] (0xc00099d130) (0xc000808460) Create stream\nI0511 00:07:18.134821 953 log.go:172] (0xc00099d130) (0xc000808460) Stream added, broadcasting: 3\nI0511 00:07:18.135747 953 log.go:172] (0xc00099d130) Reply frame received for 3\nI0511 00:07:18.135804 953 log.go:172] (0xc00099d130) (0xc000808d20) Create stream\nI0511 00:07:18.135822 953 log.go:172] (0xc00099d130) (0xc000808d20) Stream added, broadcasting: 5\nI0511 00:07:18.136774 953 log.go:172] (0xc00099d130) Reply frame received for 5\nI0511 00:07:18.213998 953 log.go:172] (0xc00099d130) Data frame received for 5\nI0511 00:07:18.214038 953 log.go:172] (0xc000808d20) (5) Data frame handling\nI0511 00:07:18.214059 953 log.go:172] (0xc000808d20) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0511 00:07:18.214650 953 log.go:172] (0xc00099d130) Data frame received for 5\nI0511 00:07:18.214681 953 log.go:172] (0xc000808d20) (5) Data frame handling\nI0511 00:07:18.214712 953 
log.go:172] (0xc000808d20) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0511 00:07:18.215032 953 log.go:172] (0xc00099d130) Data frame received for 5\nI0511 00:07:18.215062 953 log.go:172] (0xc000808d20) (5) Data frame handling\nI0511 00:07:18.215227 953 log.go:172] (0xc00099d130) Data frame received for 3\nI0511 00:07:18.215254 953 log.go:172] (0xc000808460) (3) Data frame handling\nI0511 00:07:18.217310 953 log.go:172] (0xc00099d130) Data frame received for 1\nI0511 00:07:18.217336 953 log.go:172] (0xc00066bae0) (1) Data frame handling\nI0511 00:07:18.217353 953 log.go:172] (0xc00066bae0) (1) Data frame sent\nI0511 00:07:18.217377 953 log.go:172] (0xc00099d130) (0xc00066bae0) Stream removed, broadcasting: 1\nI0511 00:07:18.217404 953 log.go:172] (0xc00099d130) Go away received\nI0511 00:07:18.217849 953 log.go:172] (0xc00099d130) (0xc00066bae0) Stream removed, broadcasting: 1\nI0511 00:07:18.217868 953 log.go:172] (0xc00099d130) (0xc000808460) Stream removed, broadcasting: 3\nI0511 00:07:18.217879 953 log.go:172] (0xc00099d130) (0xc000808d20) Stream removed, broadcasting: 5\n" May 11 00:07:18.222: INFO: stdout: "" May 11 00:07:18.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c nc -zv -t -w 2 10.109.139.167 80' May 11 00:07:18.424: INFO: stderr: "I0511 00:07:18.353005 974 log.go:172] (0xc000987810) (0xc00096c460) Create stream\nI0511 00:07:18.353071 974 log.go:172] (0xc000987810) (0xc00096c460) Stream added, broadcasting: 1\nI0511 00:07:18.357347 974 log.go:172] (0xc000987810) Reply frame received for 1\nI0511 00:07:18.357414 974 log.go:172] (0xc000987810) (0xc000808280) Create stream\nI0511 00:07:18.357436 974 log.go:172] (0xc000987810) (0xc000808280) Stream added, broadcasting: 3\nI0511 00:07:18.358429 974 log.go:172] (0xc000987810) Reply frame received for 3\nI0511 00:07:18.358464 974 log.go:172] (0xc000987810) (0xc0006c25a0) Create stream\nI0511 00:07:18.358475 974 log.go:172] (0xc000987810) (0xc0006c25a0) Stream added, broadcasting: 5\nI0511 00:07:18.359240 974 log.go:172] (0xc000987810) Reply frame received for 5\nI0511 00:07:18.417382 974 log.go:172] (0xc000987810) Data frame received for 3\nI0511 00:07:18.417416 974 log.go:172] (0xc000808280) (3) Data frame handling\nI0511 00:07:18.417435 974 log.go:172] (0xc000987810) Data frame received for 5\nI0511 00:07:18.417442 974 log.go:172] (0xc0006c25a0) (5) Data frame handling\nI0511 00:07:18.417458 974 log.go:172] (0xc0006c25a0) (5) Data frame sent\nI0511 00:07:18.417475 974 log.go:172] (0xc000987810) Data frame received for 5\nI0511 00:07:18.417487 974 log.go:172] (0xc0006c25a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.139.167 80\nConnection to 10.109.139.167 80 port [tcp/http] succeeded!\nI0511 00:07:18.418951 974 log.go:172] (0xc000987810) Data frame received for 1\nI0511 00:07:18.418991 974 log.go:172] (0xc00096c460) (1) Data frame handling\nI0511 00:07:18.419015 974 log.go:172] (0xc00096c460) (1) Data frame sent\nI0511 00:07:18.419049 974 log.go:172] (0xc000987810) (0xc00096c460) Stream removed, broadcasting: 1\nI0511 00:07:18.419071 974 log.go:172] (0xc000987810) Go away received\nI0511 00:07:18.419503 974 log.go:172] (0xc000987810) (0xc00096c460) Stream removed, broadcasting: 1\nI0511 00:07:18.419525 974 log.go:172] (0xc000987810) (0xc000808280) Stream removed, broadcasting: 3\nI0511 00:07:18.419539 974 log.go:172] (0xc000987810) 
(0xc0006c25a0) Stream removed, broadcasting: 5\n" May 11 00:07:18.424: INFO: stdout: "" May 11 00:07:18.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.139.167:80/ ; done' May 11 00:07:18.726: INFO: stderr: "I0511 00:07:18.560579 995 log.go:172] (0xc000a67550) (0xc000825ae0) Create stream\nI0511 00:07:18.560667 995 log.go:172] (0xc000a67550) (0xc000825ae0) Stream added, broadcasting: 1\nI0511 00:07:18.565594 995 log.go:172] (0xc000a67550) Reply frame received for 1\nI0511 00:07:18.565625 995 log.go:172] (0xc000a67550) (0xc00080a460) Create stream\nI0511 00:07:18.565632 995 log.go:172] (0xc000a67550) (0xc00080a460) Stream added, broadcasting: 3\nI0511 00:07:18.566483 995 log.go:172] (0xc000a67550) Reply frame received for 3\nI0511 00:07:18.566516 995 log.go:172] (0xc000a67550) (0xc000684460) Create stream\nI0511 00:07:18.566526 995 log.go:172] (0xc000a67550) (0xc000684460) Stream added, broadcasting: 5\nI0511 00:07:18.567293 995 log.go:172] (0xc000a67550) Reply frame received for 5\nI0511 00:07:18.622286 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.622332 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.622368 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.622392 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.622424 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.622445 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.629001 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.629020 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.629037 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.629491 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.629516 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.629530 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.629688 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.629704 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.629716 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.639692 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.639710 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.639726 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.640264 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.640301 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.640316 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.640333 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.640344 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.640359 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.644779 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.644796 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.644810 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.645598 995 log.go:172] 
(0xc000a67550) Data frame received for 3\nI0511 00:07:18.645629 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.645642 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.645656 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.645664 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.645672 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.651300 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.651324 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.651344 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.651718 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.651753 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.651766 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.651786 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.651797 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.651816 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.655749 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.655763 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.655769 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.656345 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.656370 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.656380 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.656393 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.656400 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.656406 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.660790 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.660816 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.660849 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.661650 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.661699 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.661718 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.661760 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.661793 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.661831 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.667658 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.667680 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.667701 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.668358 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.668370 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.668381 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.668441 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.668451 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.668456 995 log.go:172] (0xc00080a460) (3) Data frame 
sent\nI0511 00:07:18.674501 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.674530 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.674547 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.675181 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.675359 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.675389 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.675687 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.675735 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.675754 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.679993 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.680010 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.680022 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.680360 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.680386 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.680404 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.680413 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.680424 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.680430 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.684835 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.684850 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.684863 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.685432 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.685453 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.685469 995 log.go:172] (0xc000684460) (5) Data frame sent\nI0511 00:07:18.685483 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.685495 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.685515 995 log.go:172] (0xc00080a460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.688906 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.688924 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.688938 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.689471 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.689497 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.689513 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.689528 995 log.go:172] (0xc000684460) (5) Data frame sent\nI0511 00:07:18.689537 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.689557 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.689568 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.689577 995 log.go:172] (0xc00080a460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.689631 995 log.go:172] (0xc000684460) (5) Data frame sent\nI0511 00:07:18.694951 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.694969 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.694980 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.695440 995 
log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.695467 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.695497 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.695543 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.695560 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.695572 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.699585 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.699603 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.699618 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.700047 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.700062 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.700094 995 log.go:172] (0xc000684460) (5) Data frame sent\nI0511 00:07:18.700108 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.700115 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.700123 995 log.go:172] (0xc00080a460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.705882 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.705909 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.705939 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.706636 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.706656 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.706668 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.706694 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.706705 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.706717 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.711420 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.711438 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.711458 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.711897 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.711922 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.711943 995 log.go:172] (0xc000684460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.712022 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.712047 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.712059 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.717879 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.717901 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.717920 995 log.go:172] (0xc00080a460) (3) Data frame sent\nI0511 00:07:18.718652 995 log.go:172] (0xc000a67550) Data frame received for 3\nI0511 00:07:18.718684 995 log.go:172] (0xc00080a460) (3) Data frame handling\nI0511 00:07:18.718835 995 log.go:172] (0xc000a67550) Data frame received for 5\nI0511 00:07:18.718857 995 log.go:172] (0xc000684460) (5) Data frame handling\nI0511 00:07:18.720526 995 log.go:172] (0xc000a67550) Data frame received for 1\nI0511 00:07:18.720551 995 log.go:172] (0xc000825ae0) (1) Data frame handling\nI0511 00:07:18.720572 995 log.go:172] 
(0xc000825ae0) (1) Data frame sent\nI0511 00:07:18.720588 995 log.go:172] (0xc000a67550) (0xc000825ae0) Stream removed, broadcasting: 1\nI0511 00:07:18.720740 995 log.go:172] (0xc000a67550) Go away received\nI0511 00:07:18.720998 995 log.go:172] (0xc000a67550) (0xc000825ae0) Stream removed, broadcasting: 1\nI0511 00:07:18.721025 995 log.go:172] (0xc000a67550) (0xc00080a460) Stream removed, broadcasting: 3\nI0511 00:07:18.721042 995 log.go:172] (0xc000a67550) (0xc000684460) Stream removed, broadcasting: 5\n" May 11 00:07:18.727: INFO: stdout: "\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp\naffinity-clusterip-timeout-724dp" May 11 00:07:18.727: INFO: Received response from host: May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Received response from host: affinity-clusterip-timeout-724dp May 11 00:07:18.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.109.139.167:80/' May 11 00:07:18.947: INFO: stderr: "I0511 00:07:18.876456 1015 log.go:172] (0xc00003a6e0) (0xc0005f3c20) Create stream\nI0511 00:07:18.876526 1015 log.go:172] (0xc00003a6e0) (0xc0005f3c20) Stream added, broadcasting: 1\nI0511 00:07:18.878949 1015 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0511 00:07:18.879017 1015 log.go:172] (0xc00003a6e0) (0xc0004a4d20) Create stream\nI0511 00:07:18.879042 1015 log.go:172] (0xc00003a6e0) (0xc0004a4d20) Stream added, broadcasting: 3\nI0511 00:07:18.879873 1015 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0511 00:07:18.879909 1015 log.go:172] (0xc00003a6e0) (0xc0001395e0) Create stream\nI0511 00:07:18.879919 1015 log.go:172] (0xc00003a6e0) (0xc0001395e0) 
Stream added, broadcasting: 5\nI0511 00:07:18.880699 1015 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0511 00:07:18.938538 1015 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0511 00:07:18.938561 1015 log.go:172] (0xc0001395e0) (5) Data frame handling\nI0511 00:07:18.938576 1015 log.go:172] (0xc0001395e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:18.940809 1015 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0511 00:07:18.940828 1015 log.go:172] (0xc0004a4d20) (3) Data frame handling\nI0511 00:07:18.940845 1015 log.go:172] (0xc0004a4d20) (3) Data frame sent\nI0511 00:07:18.941588 1015 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0511 00:07:18.941617 1015 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0511 00:07:18.941645 1015 log.go:172] (0xc0004a4d20) (3) Data frame handling\nI0511 00:07:18.941666 1015 log.go:172] (0xc0001395e0) (5) Data frame handling\nI0511 00:07:18.943442 1015 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0511 00:07:18.943532 1015 log.go:172] (0xc0005f3c20) (1) Data frame handling\nI0511 00:07:18.943557 1015 log.go:172] (0xc0005f3c20) (1) Data frame sent\nI0511 00:07:18.943568 1015 log.go:172] (0xc00003a6e0) (0xc0005f3c20) Stream removed, broadcasting: 1\nI0511 00:07:18.943581 1015 log.go:172] (0xc00003a6e0) Go away received\nI0511 00:07:18.943878 1015 log.go:172] (0xc00003a6e0) (0xc0005f3c20) Stream removed, broadcasting: 1\nI0511 00:07:18.943894 1015 log.go:172] (0xc00003a6e0) (0xc0004a4d20) Stream removed, broadcasting: 3\nI0511 00:07:18.943900 1015 log.go:172] (0xc00003a6e0) (0xc0001395e0) Stream removed, broadcasting: 5\n" May 11 00:07:18.947: INFO: stdout: "affinity-clusterip-timeout-724dp" May 11 00:07:33.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.109.139.167:80/' May 11 00:07:34.180: INFO: stderr: "I0511 00:07:34.081842 1036 log.go:172] (0xc0005da210) (0xc0006025a0) Create stream\nI0511 00:07:34.081922 1036 log.go:172] (0xc0005da210) (0xc0006025a0) Stream added, broadcasting: 1\nI0511 00:07:34.085759 1036 log.go:172] (0xc0005da210) Reply frame received for 1\nI0511 00:07:34.085806 1036 log.go:172] (0xc0005da210) (0xc0004dcdc0) Create stream\nI0511 00:07:34.085820 1036 log.go:172] (0xc0005da210) (0xc0004dcdc0) Stream added, broadcasting: 3\nI0511 00:07:34.086707 1036 log.go:172] (0xc0005da210) Reply frame received for 3\nI0511 00:07:34.086756 1036 log.go:172] (0xc0005da210) (0xc0000dcfa0) Create stream\nI0511 00:07:34.086779 1036 log.go:172] (0xc0005da210) (0xc0000dcfa0) Stream added, broadcasting: 5\nI0511 00:07:34.087700 1036 log.go:172] (0xc0005da210) Reply frame received for 5\nI0511 00:07:34.167595 1036 log.go:172] (0xc0005da210) Data frame received for 5\nI0511 00:07:34.167629 1036 log.go:172] (0xc0000dcfa0) (5) Data frame handling\nI0511 00:07:34.167644 1036 log.go:172] (0xc0000dcfa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:34.172288 1036 log.go:172] (0xc0005da210) Data frame received for 3\nI0511 00:07:34.172326 1036 log.go:172] (0xc0004dcdc0) (3) Data frame handling\nI0511 00:07:34.172350 1036 log.go:172] (0xc0004dcdc0) (3) Data frame sent\nI0511 00:07:34.173333 1036 log.go:172] (0xc0005da210) Data frame received for 3\nI0511 00:07:34.173358 1036 log.go:172] (0xc0004dcdc0) (3) Data frame handling\nI0511 00:07:34.173620 
1036 log.go:172] (0xc0005da210) Data frame received for 5\nI0511 00:07:34.173644 1036 log.go:172] (0xc0000dcfa0) (5) Data frame handling\nI0511 00:07:34.175368 1036 log.go:172] (0xc0005da210) Data frame received for 1\nI0511 00:07:34.175402 1036 log.go:172] (0xc0006025a0) (1) Data frame handling\nI0511 00:07:34.175422 1036 log.go:172] (0xc0006025a0) (1) Data frame sent\nI0511 00:07:34.175446 1036 log.go:172] (0xc0005da210) (0xc0006025a0) Stream removed, broadcasting: 1\nI0511 00:07:34.175487 1036 log.go:172] (0xc0005da210) Go away received\nI0511 00:07:34.175937 1036 log.go:172] (0xc0005da210) (0xc0006025a0) Stream removed, broadcasting: 1\nI0511 00:07:34.175964 1036 log.go:172] (0xc0005da210) (0xc0004dcdc0) Stream removed, broadcasting: 3\nI0511 00:07:34.175977 1036 log.go:172] (0xc0005da210) (0xc0000dcfa0) Stream removed, broadcasting: 5\n" May 11 00:07:34.180: INFO: stdout: "affinity-clusterip-timeout-724dp" May 11 00:07:49.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7574 execpod-affinitydtbc4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.109.139.167:80/' May 11 00:07:49.431: INFO: stderr: "I0511 00:07:49.329984 1058 log.go:172] (0xc00099fa20) (0xc000bee280) Create stream\nI0511 00:07:49.330037 1058 log.go:172] (0xc00099fa20) (0xc000bee280) Stream added, broadcasting: 1\nI0511 00:07:49.333255 1058 log.go:172] (0xc00099fa20) Reply frame received for 1\nI0511 00:07:49.333281 1058 log.go:172] (0xc00099fa20) (0xc0006ee5a0) Create stream\nI0511 00:07:49.333287 1058 log.go:172] (0xc00099fa20) (0xc0006ee5a0) Stream added, broadcasting: 3\nI0511 00:07:49.334052 1058 log.go:172] (0xc00099fa20) Reply frame received for 3\nI0511 00:07:49.334081 1058 log.go:172] (0xc00099fa20) (0xc0004d2dc0) Create stream\nI0511 00:07:49.334089 1058 log.go:172] (0xc00099fa20) (0xc0004d2dc0) Stream added, broadcasting: 5\nI0511 00:07:49.334705 1058 log.go:172] (0xc00099fa20) Reply frame received for 5\nI0511 00:07:49.421403 1058 log.go:172] (0xc00099fa20) Data frame received for 5\nI0511 00:07:49.421440 1058 log.go:172] (0xc0004d2dc0) (5) Data frame handling\nI0511 00:07:49.421468 1058 log.go:172] (0xc0004d2dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.109.139.167:80/\nI0511 00:07:49.424548 1058 log.go:172] (0xc00099fa20) Data frame received for 3\nI0511 00:07:49.424576 1058 log.go:172] (0xc0006ee5a0) (3) Data frame handling\nI0511 00:07:49.424588 1058 log.go:172] (0xc0006ee5a0) (3) Data frame sent\nI0511 00:07:49.425427 1058 log.go:172] (0xc00099fa20) Data frame received for 5\nI0511 00:07:49.425441 1058 log.go:172] (0xc0004d2dc0) (5) Data frame handling\nI0511 00:07:49.425465 1058 log.go:172] (0xc00099fa20) Data frame received for 3\nI0511 00:07:49.425493 1058 log.go:172] (0xc0006ee5a0) (3) Data frame handling\nI0511 00:07:49.426808 1058 log.go:172] (0xc00099fa20) Data frame received for 1\nI0511 00:07:49.426831 1058 log.go:172] (0xc000bee280) (1) Data frame handling\nI0511 00:07:49.426855 1058 log.go:172] (0xc000bee280) (1) Data frame sent\nI0511 00:07:49.426880 1058 log.go:172] (0xc00099fa20) (0xc000bee280) Stream removed, broadcasting: 1\nI0511 00:07:49.426902 1058 log.go:172] (0xc00099fa20) Go away received\nI0511 00:07:49.427213 1058 log.go:172] (0xc00099fa20) (0xc000bee280) Stream removed, broadcasting: 1\nI0511 00:07:49.427227 1058 log.go:172] (0xc00099fa20) (0xc0006ee5a0) Stream removed, broadcasting: 3\nI0511 00:07:49.427233 1058 log.go:172] (0xc00099fa20) (0xc0004d2dc0) Stream 
removed, broadcasting: 5\n" May 11 00:07:49.431: INFO: stdout: "affinity-clusterip-timeout-9r57b" May 11 00:07:49.431: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7574, will wait for the garbage collector to delete the pods May 11 00:07:49.528: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.965325ms May 11 00:07:49.929: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.555406ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:05.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7574" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:72.978 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":76,"skipped":1284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:05.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 00:08:06.109: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 00:08:08.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:08:10.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752486, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:08:13.294: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:08:13.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2648" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.239 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":77,"skipped":1347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:14.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:25.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6548" for this suite. • [SLOW TEST:11.146 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":78,"skipped":1380,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:25.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:08:25.832: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:27.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7446" for this suite. 
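Looking back at the session-affinity spec above (completed 76): everything is driven through kubectl exec, but the object under test is just a ClusterIP Service with ClientIP session affinity and a short timeout. The 16 consecutive curls all land on affinity-clusterip-timeout-724dp; after idle waits the affinity entry expires and a different backend, affinity-clusterip-timeout-9r57b, answers. A minimal Go sketch of that Service shape, assuming a 15-second timeout and an illustrative backend port (the e2e test picks its own values):

```go
// Sketch only: the Service shape this spec exercises. The timeout value
// and target port below are assumptions for illustration; the valid
// range for TimeoutSeconds is 1..86400.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func affinityService() *corev1.Service {
	timeout := int32(15) // hypothetical affinity timeout in seconds
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip-timeout"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // the port the curls above hit
				TargetPort: intstr.FromInt(8080), // illustrative backend port
			}},
			// Route a given client IP to the same backend pod...
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// ...until the client has been idle longer than the timeout.
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
}

func main() { _ = affinityService() }
```

The session-affinity timeout tests only run against the iptables and ipvs proxiers, hence the curl to localhost:10249/proxyMode at the start of the spec.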
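For the conversion-webhook spec (completed 77), the moving part is the CRD's spec.conversion stanza: when a mixed list of v1 and v2 custom resources is read, the apiserver sends ConversionReview requests to the deployed webhook Service and returns every object in the requested version. A sketch of that stanza with the apiextensions v1 Go types; the Service name and namespace come from the log above, while the path and port are assumptions:

```go
// Sketch only: a CRD conversion stanza of the kind this spec deploys.
// Service name/namespace are from the log; path and port are assumptions.
package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func conversion(caBundle []byte) *apiextensionsv1.CustomResourceConversion {
	path := "/crdconvert" // hypothetical webhook path
	port := int32(9443)   // hypothetical service port
	return &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-2648",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: caBundle, // CA that signed the webhook's serving cert
			},
			// ConversionReview versions the webhook can speak.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
}

func main() { _ = conversion(nil) }
```

The CABundle is what the "Setting up server cert" STEP provides; the apiserver only calls the webhook over TLS it can verify against that CA.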
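The ResourceQuota spec (completed 78) needs no webhook at all: create a quota, watch status.used pick up the ReplicationController, delete the RC, and watch the usage drop back. A minimal sketch of a quota that counts replication controllers (the object name and the limit of 1 are illustrative):

```go
// Sketch only: a quota that counts ReplicationControllers, the resource
// this spec creates and deletes. Name and limit are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func rcQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Creating an RC raises status.used; deleting it releases it.
				corev1.ResourceReplicationControllers: resource.MustParse("1"),
			},
		},
	}
}

func main() { _ = rcQuota() }
```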
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":79,"skipped":1388,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:27.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 11 00:08:27.153: INFO: Waiting up to 5m0s for pod "client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5" in namespace "containers-5851" to be "Succeeded or Failed" May 11 00:08:27.156: INFO: Pod "client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411512ms May 11 00:08:29.161: INFO: Pod "client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008284156s May 11 00:08:31.188: INFO: Pod "client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035031957s STEP: Saw pod success May 11 00:08:31.188: INFO: Pod "client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5" satisfied condition "Succeeded or Failed" May 11 00:08:31.191: INFO: Trying to get logs from node latest-worker2 pod client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5 container test-container: STEP: delete the pod May 11 00:08:31.259: INFO: Waiting for pod client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5 to disappear May 11 00:08:31.281: INFO: Pod client-containers-bf8d7e57-6657-4900-b987-1a2a1f53dea5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:31.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5851" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1410,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:31.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating an pod May 11 00:08:31.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-4430 -- logs-generator --log-lines-total 100 --run-duration 20s' May 11 00:08:31.517: INFO: stderr: "" May 11 00:08:31.517: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 11 00:08:31.517: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 11 00:08:31.517: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4430" to be "running and ready, or succeeded" May 11 00:08:31.526: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56648ms May 11 00:08:33.529: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011869603s May 11 00:08:35.534: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.016699344s May 11 00:08:35.534: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 11 00:08:35.534: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 11 00:08:35.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430' May 11 00:08:35.660: INFO: stderr: "" May 11 00:08:35.660: INFO: stdout: "I0511 00:08:34.097348 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/jf2q 201\nI0511 00:08:34.297515 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/zjt 446\nI0511 00:08:34.497537 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/nzg 458\nI0511 00:08:34.697559 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/gksq 496\nI0511 00:08:34.897529 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/rqk 525\nI0511 00:08:35.097822 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/mnds 457\nI0511 00:08:35.297498 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9cf 446\nI0511 00:08:35.497529 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9pz 508\n" STEP: limiting log lines May 11 00:08:35.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430 --tail=1' May 11 00:08:35.780: INFO: stderr: "" May 11 00:08:35.780: INFO: stdout: "I0511 00:08:35.697501 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/z45 222\n" May 11 00:08:35.780: INFO: got output "I0511 00:08:35.697501 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/z45 222\n" STEP: limiting log bytes May 11 00:08:35.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430 --limit-bytes=1' May 11 00:08:35.908: INFO: stderr: "" May 11 00:08:35.908: INFO: stdout: "I" May 11 00:08:35.908: INFO: got output "I" STEP: exposing timestamps May 11 00:08:35.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430 --tail=1 --timestamps' May 11 00:08:36.020: INFO: stderr: "" May 11 00:08:36.020: INFO: stdout: "2020-05-11T00:08:35.897671766Z I0511 00:08:35.897500 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4ht4 235\n" May 11 00:08:36.020: INFO: got output "2020-05-11T00:08:35.897671766Z I0511 00:08:35.897500 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4ht4 235\n" STEP: restricting to a time range May 11 00:08:38.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430 --since=1s' May 11 00:08:38.647: INFO: stderr: "" May 11 00:08:38.647: INFO: stdout: "I0511 00:08:37.697564 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/nl6 349\nI0511 00:08:37.897502 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/vgqt 463\nI0511 00:08:38.097540 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/pk46 261\nI0511 00:08:38.297566 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/ns6 593\nI0511 00:08:38.497520 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/rzh 224\n" May 11 00:08:38.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4430 --since=24h' May 11 00:08:38.770: INFO:
stderr: "" May 11 00:08:38.770: INFO: stdout: "I0511 00:08:34.097348 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/jf2q 201\nI0511 00:08:34.297515 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/zjt 446\nI0511 00:08:34.497537 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/nzg 458\nI0511 00:08:34.697559 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/gksq 496\nI0511 00:08:34.897529 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/rqk 525\nI0511 00:08:35.097822 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/mnds 457\nI0511 00:08:35.297498 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/9cf 446\nI0511 00:08:35.497529 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9pz 508\nI0511 00:08:35.697501 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/z45 222\nI0511 00:08:35.897500 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4ht4 235\nI0511 00:08:36.097537 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/qv2n 375\nI0511 00:08:36.297517 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/9z8 328\nI0511 00:08:36.497595 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/9xh7 378\nI0511 00:08:36.697500 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/pz2 530\nI0511 00:08:36.897476 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/267w 574\nI0511 00:08:37.097611 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/h5gc 370\nI0511 00:08:37.297470 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cb99 333\nI0511 00:08:37.497490 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/tkxn 446\nI0511 00:08:37.697564 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/nl6 349\nI0511 00:08:37.897502 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/vgqt 463\nI0511 00:08:38.097540 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/pk46 261\nI0511 00:08:38.297566 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/ns6 593\nI0511 00:08:38.497520 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/rzh 224\nI0511 00:08:38.697480 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/95g 385\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 11 00:08:38.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4430' May 11 00:08:45.241: INFO: stderr: "" May 11 00:08:45.241: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:08:45.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4430" for this suite. 
• [SLOW TEST:13.907 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":81,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:08:45.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 11 00:08:45.301: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:09:02.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5507" for this suite. 
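The rename spec above works because the apiserver republishes its OpenAPI document from the set of served CRD versions: rename a served version and the old definition names disappear while the other version's published spec stays put, which is exactly what the three "check" STEPs assert. A sketch of what the versions list might look like after such a rename (version names and the permissive schema are illustrative, not the test's):

```go
// Sketch only: a served/storage versions list as it might look after the
// rename. Version names and the permissive schema are assumptions.
package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func boolPtr(b bool) *bool { return &b }

func versionsAfterRename() []apiextensionsv1.CustomResourceDefinitionVersion {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: boolPtr(true),
		},
	}
	return []apiextensionsv1.CustomResourceDefinitionVersion{
		// Renamed version: published under the new name; the old name is
		// dropped from the OpenAPI document.
		{Name: "v4", Served: true, Storage: true, Schema: schema},
		// Untouched version: its published spec must not change.
		{Name: "v3", Served: true, Storage: false, Schema: schema},
	}
}

func main() { _ = versionsAfterRename() }
```

Exactly one version may have Storage set to true, and renaming the storage version leaves the old name in status.storedVersions until it is migrated out.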
• [SLOW TEST:17.095 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":82,"skipped":1450,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:09:02.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-04eac0b6-0163-4ec7-ac02-6c10c2a02199 in namespace container-probe-6403 May 11 00:09:06.483: INFO: Started pod busybox-04eac0b6-0163-4ec7-ac02-6c10c2a02199 in namespace container-probe-6403 STEP: checking the pod's current state and verifying that restartCount is present May 11 00:09:06.487: INFO: Initial restart count of pod busybox-04eac0b6-0163-4ec7-ac02-6c10c2a02199 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:07.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6403" for this suite. 
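This probe spec is the slow inverse case: the exec probe keeps succeeding, so the kubelet must never restart the container, and the test simply watches restartCount stay at 0 for roughly four minutes. A sketch of the probe shape; the numeric thresholds are assumptions, and note the field-name caveat in the comment:

```go
// Sketch only: an exec liveness probe equivalent to the one this spec
// watches. Thresholds are assumptions. Note: current client-go embeds
// the handler as ProbeHandler; the 1.19-era API of this run called the
// same inlined field Handler.
package main

import corev1 "k8s.io/api/core/v1"

func healthFileProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			// Succeeds as long as /tmp/health exists, so no restarts occur.
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      5,
		FailureThreshold:    3,
	}
}

func main() { _ = healthFileProbe() }
```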
• [SLOW TEST:244.933 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1462,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:07.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:13:07.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192" in namespace "downward-api-654" to be "Succeeded or Failed" May 11 00:13:07.421: INFO: Pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192": Phase="Pending", Reason="", readiness=false. Elapsed: 33.391524ms May 11 00:13:09.431: INFO: Pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043102042s May 11 00:13:11.447: INFO: Pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058699469s May 11 00:13:13.451: INFO: Pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062740152s STEP: Saw pod success May 11 00:13:13.451: INFO: Pod "downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192" satisfied condition "Succeeded or Failed" May 11 00:13:13.454: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192 container client-container: STEP: delete the pod May 11 00:13:13.557: INFO: Waiting for pod downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192 to disappear May 11 00:13:13.584: INFO: Pod downwardapi-volume-1643ba8b-4171-4a28-a603-d9a2740af192 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:13.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-654" for this suite. 
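The downward API spec above mounts a volume file backed by a resourceFieldRef on limits.memory and then verifies the container can read its own limit from that file. A sketch of the volume (the container name mirrors the log; the volume name, path, and divisor are illustrative):

```go
// Sketch only: a downwardAPI volume exposing limits.memory as a file.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func memoryLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
						Divisor:       resource.MustParse("1"), // write plain bytes
					},
				}},
			},
		},
	}
}

func main() { _ = memoryLimitVolume() }
```

With Divisor "1" the kubelet writes the limit in plain bytes; a divisor of "1Mi" would scale it to mebibytes.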
• [SLOW TEST:6.317 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1467,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:13.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7927.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7927.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7927.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7927.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7927.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7927.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:13:19.715: INFO: DNS probes using dns-7927/dns-test-68eb09a3-962d-4066-8e2a-abea82e24e62 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:19.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7927" for this suite. 
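------------------------------
The wheezy/jessie probe loops above boil down to two /etc/hosts lookups: the pod's own hostname and its hostname.subdomain FQDN. Stripped of the 600-iteration loop, the same check might look like this (a sketch; names are illustrative, and cluster-wide DNS resolution of the FQDN would additionally need a matching headless Service):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-probe             # illustrative name
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service
  containers:
  - name: probe
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
# kubelet writes both the bare hostname and the FQDN into the pod's
# /etc/hosts, which is what the spec asserts:
kubectl exec hosts-probe -- grep dns-querier-1 /etc/hosts
------------------------------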
• [SLOW TEST:6.225 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":85,"skipped":1467,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:19.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:13:26.502: INFO: DNS probes using dns-3211/dns-test-c5f9380d-efb6-45f8-b0de-6bb995f4fe7b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:26.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3211" for this suite. 
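------------------------------
The equivalent one-off check for cluster DNS, without the probe loop, might be run like this (a sketch assuming the default cluster.local domain; busybox's nslookup stands in for the dig-based UDP/TCP pair the test uses):

kubectl run dns-check --image=busybox:1.29 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
# Expect an answer from the cluster DNS service (10.96.0.10 in this run)
# carrying the kubernetes.default ClusterIP.
------------------------------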
• [SLOW TEST:6.815 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":86,"skipped":1474,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:26.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 11 00:13:27.176: INFO: >>> kubeConfig: /root/.kube/config May 11 00:13:30.105: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5193" for this suite. 
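------------------------------
The fixture behind the spec above registers two kinds under one group/version and confirms both surface in the published OpenAPI. One half of such a pair might look like this (group and kind are illustrative; the real test generates random names):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # repeat with kind Bar for the second CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# kubectl explain is served from the published spec, so this succeeds only
# once the new kind has been aggregated into /openapi/v2:
kubectl explain foos
------------------------------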
• [SLOW TEST:14.193 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":87,"skipped":1474,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:40.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 11 00:13:40.883: INFO: Waiting up to 5m0s for pod "client-containers-06301a02-c3af-4e94-9245-064ce01d6bab" in namespace "containers-9559" to be "Succeeded or Failed" May 11 00:13:40.914: INFO: Pod "client-containers-06301a02-c3af-4e94-9245-064ce01d6bab": Phase="Pending", Reason="", readiness=false. Elapsed: 30.441971ms May 11 00:13:42.919: INFO: Pod "client-containers-06301a02-c3af-4e94-9245-064ce01d6bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03550951s May 11 00:13:44.922: INFO: Pod "client-containers-06301a02-c3af-4e94-9245-064ce01d6bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038697031s STEP: Saw pod success May 11 00:13:44.922: INFO: Pod "client-containers-06301a02-c3af-4e94-9245-064ce01d6bab" satisfied condition "Succeeded or Failed" May 11 00:13:44.924: INFO: Trying to get logs from node latest-worker2 pod client-containers-06301a02-c3af-4e94-9245-064ce01d6bab container test-container: STEP: delete the pod May 11 00:13:45.016: INFO: Waiting for pod client-containers-06301a02-c3af-4e94-9245-064ce01d6bab to disappear May 11 00:13:45.052: INFO: Pod client-containers-06301a02-c3af-4e94-9245-064ce01d6bab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:45.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9559" for this suite. 
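------------------------------
"Override the image's default arguments (docker cmd)" in pod terms: setting args replaces the image's CMD while leaving its ENTRYPOINT intact (command would replace the ENTRYPOINT as well). A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # args overrides the image CMD, so the container runs `echo` instead
    # of busybox's default shell
    args: ["echo", "overridden args"]
EOF
kubectl logs args-override      # prints: overridden args
------------------------------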
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1479,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:45.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:45.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4493" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":89,"skipped":1484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:45.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-9f1bd22a-566f-4f2d-b026-cd32560eeefb STEP: Creating a pod to test consume secrets May 11 00:13:45.259: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812" in namespace "projected-8235" to be "Succeeded or Failed" May 11 00:13:45.262: INFO: Pod "pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855345ms May 11 00:13:47.266: INFO: Pod "pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006828966s May 11 00:13:49.270: INFO: Pod "pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011161277s STEP: Saw pod success May 11 00:13:49.270: INFO: Pod "pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812" satisfied condition "Succeeded or Failed" May 11 00:13:49.274: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812 container projected-secret-volume-test: STEP: delete the pod May 11 00:13:49.317: INFO: Waiting for pod pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812 to disappear May 11 00:13:49.331: INFO: Pod pod-projected-secrets-0615e745-2486-4ac6-aeab-b39d66a5b812 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:49.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8235" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1512,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:49.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:13:49.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25" in namespace "projected-7030" to be "Succeeded or Failed" May 11 00:13:49.507: INFO: Pod "downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25": Phase="Pending", Reason="", readiness=false. Elapsed: 41.285192ms May 11 00:13:51.511: INFO: Pod "downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045336942s May 11 00:13:53.515: INFO: Pod "downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04967018s STEP: Saw pod success May 11 00:13:53.515: INFO: Pod "downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25" satisfied condition "Succeeded or Failed" May 11 00:13:53.518: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25 container client-container: STEP: delete the pod May 11 00:13:53.579: INFO: Waiting for pod downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25 to disappear May 11 00:13:53.681: INFO: Pod downwardapi-volume-b688be12-49eb-4e5c-92ee-4fd417842f25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:13:53.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7030" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1512,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:13:53.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:13:54.852: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:13:56.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:13:58.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724752834, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:14:01.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:14:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6739" for this suite. STEP: Destroying namespace "webhook-6739-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":92,"skipped":1518,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:14:02.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:14:02.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912" in namespace "downward-api-710" to be "Succeeded or Failed" May 11 00:14:02.776: INFO: Pod 
"downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912": Phase="Pending", Reason="", readiness=false. Elapsed: 14.385363ms May 11 00:14:04.780: INFO: Pod "downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018556276s May 11 00:14:06.784: INFO: Pod "downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022654203s STEP: Saw pod success May 11 00:14:06.784: INFO: Pod "downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912" satisfied condition "Succeeded or Failed" May 11 00:14:06.787: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912 container client-container: STEP: delete the pod May 11 00:14:06.843: INFO: Waiting for pod downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912 to disappear May 11 00:14:06.854: INFO: Pod downwardapi-volume-7e930ca7-7fec-43f4-9419-62fcc096d912 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:14:06.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-710" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1524,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:14:06.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:14:06.972: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739" in namespace "downward-api-3372" to be "Succeeded or Failed" May 11 00:14:06.994: INFO: Pod "downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739": Phase="Pending", Reason="", readiness=false. Elapsed: 22.207299ms May 11 00:14:08.998: INFO: Pod "downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025804171s May 11 00:14:11.002: INFO: Pod "downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030055356s STEP: Saw pod success May 11 00:14:11.002: INFO: Pod "downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739" satisfied condition "Succeeded or Failed" May 11 00:14:11.006: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739 container client-container: STEP: delete the pod May 11 00:14:11.041: INFO: Waiting for pod downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739 to disappear May 11 00:14:11.054: INFO: Pod downwardapi-volume-bbcfe2a8-7e1c-4076-9fc6-29f4c6f66739 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:14:11.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3372" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":94,"skipped":1528,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:14:11.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1510 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1510 STEP: creating replication controller externalsvc in namespace services-1510 I0511 00:14:11.264416 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1510, replica count: 2 I0511 00:14:14.314812 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:14:17.315056 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 11 00:14:17.388: INFO: Creating new exec pod May 11 00:14:21.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1510 execpods6f4p -- /bin/sh -x -c nslookup clusterip-service' May 11 00:14:24.145: INFO: stderr: "I0511 00:14:24.042469 1238 log.go:172] (0xc000b108f0) (0xc0005c0c80) Create stream\nI0511 00:14:24.042509 1238 log.go:172] (0xc000b108f0) (0xc0005c0c80) Stream added, broadcasting: 1\nI0511 00:14:24.045238 1238 log.go:172] (0xc000b108f0) Reply frame received for 1\nI0511 00:14:24.045300 1238 log.go:172] (0xc000b108f0) (0xc0005b8500) Create stream\nI0511 00:14:24.045311 1238 
log.go:172] (0xc000b108f0) (0xc0005b8500) Stream added, broadcasting: 3\nI0511 00:14:24.046275 1238 log.go:172] (0xc000b108f0) Reply frame received for 3\nI0511 00:14:24.046324 1238 log.go:172] (0xc000b108f0) (0xc0005b8dc0) Create stream\nI0511 00:14:24.046332 1238 log.go:172] (0xc000b108f0) (0xc0005b8dc0) Stream added, broadcasting: 5\nI0511 00:14:24.047182 1238 log.go:172] (0xc000b108f0) Reply frame received for 5\nI0511 00:14:24.126861 1238 log.go:172] (0xc000b108f0) Data frame received for 5\nI0511 00:14:24.126890 1238 log.go:172] (0xc0005b8dc0) (5) Data frame handling\nI0511 00:14:24.126913 1238 log.go:172] (0xc0005b8dc0) (5) Data frame sent\n+ nslookup clusterip-service\nI0511 00:14:24.137453 1238 log.go:172] (0xc000b108f0) Data frame received for 3\nI0511 00:14:24.137486 1238 log.go:172] (0xc0005b8500) (3) Data frame handling\nI0511 00:14:24.137502 1238 log.go:172] (0xc0005b8500) (3) Data frame sent\nI0511 00:14:24.138527 1238 log.go:172] (0xc000b108f0) Data frame received for 3\nI0511 00:14:24.138545 1238 log.go:172] (0xc0005b8500) (3) Data frame handling\nI0511 00:14:24.138559 1238 log.go:172] (0xc0005b8500) (3) Data frame sent\nI0511 00:14:24.139057 1238 log.go:172] (0xc000b108f0) Data frame received for 3\nI0511 00:14:24.139080 1238 log.go:172] (0xc0005b8500) (3) Data frame handling\nI0511 00:14:24.139342 1238 log.go:172] (0xc000b108f0) Data frame received for 5\nI0511 00:14:24.139364 1238 log.go:172] (0xc0005b8dc0) (5) Data frame handling\nI0511 00:14:24.140844 1238 log.go:172] (0xc000b108f0) Data frame received for 1\nI0511 00:14:24.140882 1238 log.go:172] (0xc0005c0c80) (1) Data frame handling\nI0511 00:14:24.140905 1238 log.go:172] (0xc0005c0c80) (1) Data frame sent\nI0511 00:14:24.140938 1238 log.go:172] (0xc000b108f0) (0xc0005c0c80) Stream removed, broadcasting: 1\nI0511 00:14:24.140969 1238 log.go:172] (0xc000b108f0) Go away received\nI0511 00:14:24.141425 1238 log.go:172] (0xc000b108f0) (0xc0005c0c80) Stream removed, broadcasting: 1\nI0511 00:14:24.141443 1238 log.go:172] (0xc000b108f0) (0xc0005b8500) Stream removed, broadcasting: 3\nI0511 00:14:24.141450 1238 log.go:172] (0xc000b108f0) (0xc0005b8dc0) Stream removed, broadcasting: 5\n" May 11 00:14:24.145: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1510.svc.cluster.local\tcanonical name = externalsvc.services-1510.svc.cluster.local.\nName:\texternalsvc.services-1510.svc.cluster.local\nAddress: 10.111.169.203\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1510, will wait for the garbage collector to delete the pods May 11 00:14:24.206: INFO: Deleting ReplicationController externalsvc took: 6.944382ms May 11 00:14:24.306: INFO: Terminating ReplicationController externalsvc pods took: 100.19298ms May 11 00:14:35.335: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:14:35.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1510" for this suite. 
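------------------------------
The type flip the spec above performs can be reproduced with a single patch; clearing clusterIP is what permits the ClusterIP-to-ExternalName change on clusters of this vintage. The service and target names below are taken from the log, but the patch itself is a sketch, not the framework's code:

kubectl patch service clusterip-service -n services-1510 --type merge -p '
{"spec": {"type": "ExternalName",
          "externalName": "externalsvc.services-1510.svc.cluster.local",
          "clusterIP": null}}'
# After the change, a lookup of the service name returns a CNAME to the
# external name rather than a ClusterIP, matching the nslookup output above.
------------------------------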
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.304 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":95,"skipped":1538,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:14:35.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5736 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 00:14:35.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 00:14:35.592: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:14:37.682: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:14:39.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:14:41.596: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:43.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:45.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:47.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:49.596: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:51.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:53.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:55.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:57.597: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:14:59.596: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 00:14:59.601: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 00:15:03.630: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.66:8080/dial?request=hostname&protocol=http&host=10.244.1.65&port=8080&tries=1'] Namespace:pod-network-test-5736 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:15:03.630: INFO: >>> kubeConfig: /root/.kube/config I0511 00:15:03.670314 7 log.go:172] (0xc0029bdc30) (0xc002a340a0) Create stream I0511 00:15:03.670359 7 log.go:172] (0xc0029bdc30) (0xc002a340a0) 
Stream added, broadcasting: 1 I0511 00:15:03.673665 7 log.go:172] (0xc0029bdc30) Reply frame received for 1 I0511 00:15:03.673722 7 log.go:172] (0xc0029bdc30) (0xc002a34140) Create stream I0511 00:15:03.673744 7 log.go:172] (0xc0029bdc30) (0xc002a34140) Stream added, broadcasting: 3 I0511 00:15:03.675679 7 log.go:172] (0xc0029bdc30) Reply frame received for 3 I0511 00:15:03.675725 7 log.go:172] (0xc0029bdc30) (0xc002a34280) Create stream I0511 00:15:03.675759 7 log.go:172] (0xc0029bdc30) (0xc002a34280) Stream added, broadcasting: 5 I0511 00:15:03.678372 7 log.go:172] (0xc0029bdc30) Reply frame received for 5 I0511 00:15:03.762707 7 log.go:172] (0xc0029bdc30) Data frame received for 3 I0511 00:15:03.762748 7 log.go:172] (0xc002a34140) (3) Data frame handling I0511 00:15:03.762778 7 log.go:172] (0xc002a34140) (3) Data frame sent I0511 00:15:03.763044 7 log.go:172] (0xc0029bdc30) Data frame received for 3 I0511 00:15:03.763085 7 log.go:172] (0xc002a34140) (3) Data frame handling I0511 00:15:03.763114 7 log.go:172] (0xc0029bdc30) Data frame received for 5 I0511 00:15:03.763136 7 log.go:172] (0xc002a34280) (5) Data frame handling I0511 00:15:03.764812 7 log.go:172] (0xc0029bdc30) Data frame received for 1 I0511 00:15:03.764837 7 log.go:172] (0xc002a340a0) (1) Data frame handling I0511 00:15:03.764855 7 log.go:172] (0xc002a340a0) (1) Data frame sent I0511 00:15:03.764873 7 log.go:172] (0xc0029bdc30) (0xc002a340a0) Stream removed, broadcasting: 1 I0511 00:15:03.764893 7 log.go:172] (0xc0029bdc30) Go away received I0511 00:15:03.765039 7 log.go:172] (0xc0029bdc30) (0xc002a340a0) Stream removed, broadcasting: 1 I0511 00:15:03.765076 7 log.go:172] (0xc0029bdc30) (0xc002a34140) Stream removed, broadcasting: 3 I0511 00:15:03.765101 7 log.go:172] (0xc0029bdc30) (0xc002a34280) Stream removed, broadcasting: 5 May 11 00:15:03.765: INFO: Waiting for responses: map[] May 11 00:15:03.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.66:8080/dial?request=hostname&protocol=http&host=10.244.2.146&port=8080&tries=1'] Namespace:pod-network-test-5736 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:15:03.769: INFO: >>> kubeConfig: /root/.kube/config I0511 00:15:03.798157 7 log.go:172] (0xc00234a6e0) (0xc002526d20) Create stream I0511 00:15:03.798196 7 log.go:172] (0xc00234a6e0) (0xc002526d20) Stream added, broadcasting: 1 I0511 00:15:03.800349 7 log.go:172] (0xc00234a6e0) Reply frame received for 1 I0511 00:15:03.800376 7 log.go:172] (0xc00234a6e0) (0xc002526dc0) Create stream I0511 00:15:03.800384 7 log.go:172] (0xc00234a6e0) (0xc002526dc0) Stream added, broadcasting: 3 I0511 00:15:03.801383 7 log.go:172] (0xc00234a6e0) Reply frame received for 3 I0511 00:15:03.801422 7 log.go:172] (0xc00234a6e0) (0xc002b28aa0) Create stream I0511 00:15:03.801443 7 log.go:172] (0xc00234a6e0) (0xc002b28aa0) Stream added, broadcasting: 5 I0511 00:15:03.802630 7 log.go:172] (0xc00234a6e0) Reply frame received for 5 I0511 00:15:03.855738 7 log.go:172] (0xc00234a6e0) Data frame received for 3 I0511 00:15:03.855778 7 log.go:172] (0xc002526dc0) (3) Data frame handling I0511 00:15:03.855810 7 log.go:172] (0xc002526dc0) (3) Data frame sent I0511 00:15:03.856205 7 log.go:172] (0xc00234a6e0) Data frame received for 5 I0511 00:15:03.856219 7 log.go:172] (0xc002b28aa0) (5) Data frame handling I0511 00:15:03.856470 7 log.go:172] (0xc00234a6e0) Data frame received for 3 I0511 00:15:03.856504 7 log.go:172] (0xc002526dc0) (3) Data frame 
handling I0511 00:15:03.858260 7 log.go:172] (0xc00234a6e0) Data frame received for 1 I0511 00:15:03.858276 7 log.go:172] (0xc002526d20) (1) Data frame handling I0511 00:15:03.858285 7 log.go:172] (0xc002526d20) (1) Data frame sent I0511 00:15:03.858300 7 log.go:172] (0xc00234a6e0) (0xc002526d20) Stream removed, broadcasting: 1 I0511 00:15:03.858350 7 log.go:172] (0xc00234a6e0) Go away received I0511 00:15:03.858440 7 log.go:172] (0xc00234a6e0) (0xc002526d20) Stream removed, broadcasting: 1 I0511 00:15:03.858455 7 log.go:172] (0xc00234a6e0) (0xc002526dc0) Stream removed, broadcasting: 3 I0511 00:15:03.858461 7 log.go:172] (0xc00234a6e0) (0xc002b28aa0) Stream removed, broadcasting: 5 May 11 00:15:03.858: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:15:03.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5736" for this suite. • [SLOW TEST:28.501 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1543,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:15:03.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 11 00:15:03.940: INFO: >>> kubeConfig: /root/.kube/config May 11 00:15:06.875: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:15:18.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8931" for this suite. 
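------------------------------
Instead of driving kubectl explain per group, the aggregated document can also be checked directly from the apiserver; definitions from both groups should be present. A loose sketch (the grep pattern assumes illustrative example.com-style groups; CRD definitions in /openapi/v2 are keyed by reversed group, version, and kind):

kubectl get --raw /openapi/v2 | tr ',' '\n' | grep -i 'example'
------------------------------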
• [SLOW TEST:14.780 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":97,"skipped":1544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:15:18.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:15:18.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2240' May 11 00:15:19.050: INFO: stderr: "" May 11 00:15:19.050: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 11 00:15:19.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2240' May 11 00:15:19.307: INFO: stderr: "" May 11 00:15:19.307: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 00:15:20.311: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:15:20.311: INFO: Found 0 / 1 May 11 00:15:21.312: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:15:21.312: INFO: Found 0 / 1 May 11 00:15:22.311: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:15:22.312: INFO: Found 0 / 1 May 11 00:15:23.311: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:15:23.312: INFO: Found 1 / 1 May 11 00:15:23.312: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 00:15:23.315: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:15:23.315: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 11 00:15:23.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-tbmrj --namespace=kubectl-2240' May 11 00:15:23.443: INFO: stderr: "" May 11 00:15:23.443: INFO: stdout: "Name: agnhost-master-tbmrj\nNamespace: kubectl-2240\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Mon, 11 May 2020 00:15:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.147\nIPs:\n IP: 10.244.2.147\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://133e960fa2991150ad75e81a43dd0f6dd9e0a63d7d42c218cef1ca824b9b0803\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 00:15:22 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gr8qp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gr8qp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gr8qp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2240/agnhost-master-tbmrj to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" May 11 00:15:23.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2240' May 11 00:15:23.565: INFO: stderr: "" May 11 00:15:23.565: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2240\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-tbmrj\n" May 11 00:15:23.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2240' May 11 00:15:23.705: INFO: stderr: "" May 11 00:15:23.705: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2240\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.111.66.194\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.147:6379\nSession Affinity: None\nEvents: \n" May 11 00:15:23.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
describe node latest-control-plane' May 11 00:15:23.838: INFO: stderr: "" May 11 00:15:23.839: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 11 May 2020 00:15:18 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 00:12:44 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 00:12:44 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 00:12:44 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 00:12:44 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 11d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 11d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 11d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 11d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 11 00:15:23.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-2240' May 11 00:15:23.938: INFO: stderr: "" May 11 00:15:23.938: INFO: stdout: "Name: kubectl-2240\nLabels: e2e-framework=kubectl\n e2e-run=c8dce0b2-f676-4ad6-a374-f2233401cc47\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:15:23.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2240" for this suite. • [SLOW TEST:5.296 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":98,"skipped":1579,"failed":0} [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:15:23.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7 STEP: updating the pod May 11 00:15:32.525: INFO: Successfully updated pod "var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7" STEP: waiting for pod and container restart STEP: Failing liveness probe May 11 00:15:32.545: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-1389 PodName:var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:15:32.545: INFO: >>> kubeConfig: /root/.kube/config I0511 00:15:32.579305 7 log.go:172] (0xc005ebe370) (0xc0028365a0) Create stream I0511 00:15:32.579332 7 log.go:172] (0xc005ebe370) (0xc0028365a0) Stream added, broadcasting: 1 I0511 00:15:32.582078 7 log.go:172] (0xc005ebe370) Reply frame received for 1 I0511 00:15:32.582128 7 log.go:172] (0xc005ebe370) (0xc002082f00) Create stream I0511 00:15:32.582146 7 log.go:172] (0xc005ebe370) (0xc002082f00) Stream added, broadcasting: 3 I0511 00:15:32.583071 7 log.go:172] (0xc005ebe370) Reply frame received for 3 I0511 00:15:32.583139 7 log.go:172] (0xc005ebe370) (0xc0025e3860) Create stream I0511 00:15:32.583161 7 log.go:172] (0xc005ebe370) (0xc0025e3860) Stream added, broadcasting: 5 I0511 00:15:32.584075 7 log.go:172] (0xc005ebe370) Reply frame received for 5 I0511 00:15:32.647141 7 log.go:172] 
(0xc005ebe370) Data frame received for 5 I0511 00:15:32.647196 7 log.go:172] (0xc0025e3860) (5) Data frame handling I0511 00:15:32.647271 7 log.go:172] (0xc005ebe370) Data frame received for 3 I0511 00:15:32.647296 7 log.go:172] (0xc002082f00) (3) Data frame handling I0511 00:15:32.648923 7 log.go:172] (0xc005ebe370) Data frame received for 1 I0511 00:15:32.648941 7 log.go:172] (0xc0028365a0) (1) Data frame handling I0511 00:15:32.648951 7 log.go:172] (0xc0028365a0) (1) Data frame sent I0511 00:15:32.648963 7 log.go:172] (0xc005ebe370) (0xc0028365a0) Stream removed, broadcasting: 1 I0511 00:15:32.649086 7 log.go:172] (0xc005ebe370) (0xc0028365a0) Stream removed, broadcasting: 1 I0511 00:15:32.649350 7 log.go:172] (0xc005ebe370) Go away received I0511 00:15:32.649437 7 log.go:172] (0xc005ebe370) (0xc002082f00) Stream removed, broadcasting: 3 I0511 00:15:32.649480 7 log.go:172] (0xc005ebe370) (0xc0025e3860) Stream removed, broadcasting: 5 May 11 00:15:32.649: INFO: Pod exec output: / STEP: Waiting for container to restart May 11 00:15:32.653: INFO: Container dapi-container, restarts: 0 May 11 00:15:42.658: INFO: Container dapi-container, restarts: 0 May 11 00:15:52.658: INFO: Container dapi-container, restarts: 0 May 11 00:16:02.658: INFO: Container dapi-container, restarts: 0 May 11 00:16:12.665: INFO: Container dapi-container, restarts: 1 May 11 00:16:12.665: INFO: Container has restart count: 1 STEP: Rewriting the file May 11 00:16:12.665: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-1389 PodName:var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:16:12.665: INFO: >>> kubeConfig: /root/.kube/config I0511 00:16:12.695849 7 log.go:172] (0xc0029bcfd0) (0xc0029c4320) Create stream I0511 00:16:12.695876 7 log.go:172] (0xc0029bcfd0) (0xc0029c4320) Stream added, broadcasting: 1 I0511 00:16:12.697929 7 log.go:172] (0xc0029bcfd0) Reply frame received for 1 I0511 00:16:12.697967 7 log.go:172] (0xc0029bcfd0) (0xc0025c8140) Create stream I0511 00:16:12.697980 7 log.go:172] (0xc0029bcfd0) (0xc0025c8140) Stream added, broadcasting: 3 I0511 00:16:12.699027 7 log.go:172] (0xc0029bcfd0) Reply frame received for 3 I0511 00:16:12.699069 7 log.go:172] (0xc0029bcfd0) (0xc0029c43c0) Create stream I0511 00:16:12.699088 7 log.go:172] (0xc0029bcfd0) (0xc0029c43c0) Stream added, broadcasting: 5 I0511 00:16:12.700105 7 log.go:172] (0xc0029bcfd0) Reply frame received for 5 I0511 00:16:12.790280 7 log.go:172] (0xc0029bcfd0) Data frame received for 5 I0511 00:16:12.790322 7 log.go:172] (0xc0029c43c0) (5) Data frame handling I0511 00:16:12.790489 7 log.go:172] (0xc0029bcfd0) Data frame received for 3 I0511 00:16:12.790524 7 log.go:172] (0xc0025c8140) (3) Data frame handling I0511 00:16:12.792327 7 log.go:172] (0xc0029bcfd0) Data frame received for 1 I0511 00:16:12.792359 7 log.go:172] (0xc0029c4320) (1) Data frame handling I0511 00:16:12.792389 7 log.go:172] (0xc0029c4320) (1) Data frame sent I0511 00:16:12.792401 7 log.go:172] (0xc0029bcfd0) (0xc0029c4320) Stream removed, broadcasting: 1 I0511 00:16:12.792464 7 log.go:172] (0xc0029bcfd0) Go away received I0511 00:16:12.792506 7 log.go:172] (0xc0029bcfd0) (0xc0029c4320) Stream removed, broadcasting: 1 I0511 00:16:12.792532 7 log.go:172] (0xc0029bcfd0) (0xc0025c8140) Stream removed, broadcasting: 3 I0511 00:16:12.792548 7 log.go:172] (0xc0029bcfd0) (0xc0029c43c0) Stream removed, broadcasting: 5 
May 11 00:16:12.792: INFO: Exec stderr: "" May 11 00:16:12.792: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 11 00:16:40.832: INFO: Container has restart count: 2 May 11 00:17:42.800: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 11 00:17:42.804: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-1389 PodName:var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:17:42.804: INFO: >>> kubeConfig: /root/.kube/config I0511 00:17:42.838559 7 log.go:172] (0xc0029bcc60) (0xc00225f5e0) Create stream I0511 00:17:42.838609 7 log.go:172] (0xc0029bcc60) (0xc00225f5e0) Stream added, broadcasting: 1 I0511 00:17:42.840255 7 log.go:172] (0xc0029bcc60) Reply frame received for 1 I0511 00:17:42.840284 7 log.go:172] (0xc0029bcc60) (0xc0029c4000) Create stream I0511 00:17:42.840294 7 log.go:172] (0xc0029bcc60) (0xc0029c4000) Stream added, broadcasting: 3 I0511 00:17:42.841393 7 log.go:172] (0xc0029bcc60) Reply frame received for 3 I0511 00:17:42.841439 7 log.go:172] (0xc0029bcc60) (0xc00225f680) Create stream I0511 00:17:42.841464 7 log.go:172] (0xc0029bcc60) (0xc00225f680) Stream added, broadcasting: 5 I0511 00:17:42.842263 7 log.go:172] (0xc0029bcc60) Reply frame received for 5 I0511 00:17:42.919395 7 log.go:172] (0xc0029bcc60) Data frame received for 5 I0511 00:17:42.919426 7 log.go:172] (0xc00225f680) (5) Data frame handling I0511 00:17:42.919461 7 log.go:172] (0xc0029bcc60) Data frame received for 3 I0511 00:17:42.919498 7 log.go:172] (0xc0029c4000) (3) Data frame handling I0511 00:17:42.921046 7 log.go:172] (0xc0029bcc60) Data frame received for 1 I0511 00:17:42.921069 7 log.go:172] (0xc00225f5e0) (1) Data frame handling I0511 00:17:42.921090 7 log.go:172] (0xc00225f5e0) (1) Data frame sent I0511 00:17:42.921485 7 log.go:172] (0xc0029bcc60) (0xc00225f5e0) Stream removed, broadcasting: 1 I0511 00:17:42.921542 7 log.go:172] (0xc0029bcc60) Go away received I0511 00:17:42.921570 7 log.go:172] (0xc0029bcc60) (0xc00225f5e0) Stream removed, broadcasting: 1 I0511 00:17:42.921598 7 log.go:172] (0xc0029bcc60) (0xc0029c4000) Stream removed, broadcasting: 3 I0511 00:17:42.921616 7 log.go:172] (0xc0029bcc60) (0xc00225f680) Stream removed, broadcasting: 5 May 11 00:17:42.925: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-1389 PodName:var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:17:42.926: INFO: >>> kubeConfig: /root/.kube/config I0511 00:17:42.960880 7 log.go:172] (0xc005ebe370) (0xc0025c8500) Create stream I0511 00:17:42.960912 7 log.go:172] (0xc005ebe370) (0xc0025c8500) Stream added, broadcasting: 1 I0511 00:17:42.962832 7 log.go:172] (0xc005ebe370) Reply frame received for 1 I0511 00:17:42.962874 7 log.go:172] (0xc005ebe370) (0xc002c320a0) Create stream I0511 00:17:42.962891 7 log.go:172] (0xc005ebe370) (0xc002c320a0) Stream added, broadcasting: 3 I0511 00:17:42.964142 7 log.go:172] (0xc005ebe370) Reply frame received for 3 I0511 00:17:42.964186 7 log.go:172] (0xc005ebe370) (0xc002c32140) Create stream I0511 00:17:42.964212 7 log.go:172] (0xc005ebe370) (0xc002c32140) Stream added, broadcasting: 5 I0511 00:17:42.965361 7 log.go:172] (0xc005ebe370) Reply frame received for 5 I0511 00:17:43.030372 7 log.go:172] (0xc005ebe370) Data frame received for 5 I0511 00:17:43.030403 7 log.go:172] (0xc002c32140) (5) Data frame handling I0511 00:17:43.030453 7 log.go:172] (0xc005ebe370) Data frame received for 3 I0511 00:17:43.030483 7 log.go:172] (0xc002c320a0) (3) Data frame handling I0511 00:17:43.032699 7 log.go:172] (0xc005ebe370) Data frame received for 1 I0511 00:17:43.032741 7 log.go:172] (0xc0025c8500) (1) Data frame handling I0511 00:17:43.032792 7 log.go:172] (0xc0025c8500) (1) Data frame sent I0511 00:17:43.032836 7 log.go:172] (0xc005ebe370) (0xc0025c8500) Stream removed, broadcasting: 1 I0511 00:17:43.032950 7 log.go:172] (0xc005ebe370) (0xc0025c8500) Stream removed, broadcasting: 1 I0511 00:17:43.032990 7 log.go:172] (0xc005ebe370) (0xc002c320a0) Stream removed, broadcasting: 3 I0511 00:17:43.033010 7 log.go:172] (0xc005ebe370) (0xc002c32140) Stream removed, broadcasting: 5 May 11 00:17:43.033: INFO: Deleting pod "var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7" in namespace "var-expansion-1389" I0511 00:17:43.033069 7 log.go:172] (0xc005ebe370) Go away received May 11 00:17:43.039: INFO: Wait up to 5m0s for pod "var-expansion-b6100ef1-ac60-4cd0-9028-977e9fcb0fe7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:18:25.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1389" for this suite. 
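The mechanism this test exercises is subPathExpr: the kubelet expands the referenced environment variable once, when the volume mount is set up, so a later change to the variable must not re-point the mount even across a container restart (the test forces the restart by deleting the file behind a liveness probe). A minimal hand-rolled equivalent follows; the annotation name, image, and env var name are illustrative assumptions, not the test's literal spec, though the foo/newsubpath values match the paths visible in the log above.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
  annotations:
    mysubpath: foo            # illustrative annotation backing the env var
spec:
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "tail -f /dev/null"]
    env:
    - name: MY_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(MY_SUBPATH)   # expanded when the mount is created
  volumes:
  - name: workdir
    emptyDir: {}
EOF
# Change the annotation; even after a container restart, /volume_mount must
# still point at the "foo" subdirectory -- exactly what the test asserts.
kubectl annotate pod var-expansion-demo mysubpath=newsubpath --overwrite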
• [SLOW TEST:181.132 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":99,"skipped":1579,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:18:25.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-27ddf5e7-bf64-46f0-9675-f0bac1cb83d0 STEP: Creating a pod to test consume secrets May 11 00:18:25.254: INFO: Waiting up to 5m0s for pod "pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b" in namespace "secrets-566" to be "Succeeded or Failed" May 11 00:18:25.294: INFO: Pod "pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.6793ms May 11 00:18:27.360: INFO: Pod "pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105929286s May 11 00:18:29.364: INFO: Pod "pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110388007s STEP: Saw pod success May 11 00:18:29.364: INFO: Pod "pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b" satisfied condition "Succeeded or Failed" May 11 00:18:29.368: INFO: Trying to get logs from node latest-worker pod pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b container secret-volume-test: STEP: delete the pod May 11 00:18:29.416: INFO: Waiting for pod pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b to disappear May 11 00:18:29.425: INFO: Pod pod-secrets-efc65a13-1e43-4934-8c0e-048ad27b9c1b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:18:29.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-566" for this suite. STEP: Destroying namespace "secret-namespace-3380" for this suite. 
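What this test pins down is namespace isolation of secret volume sources: a pod's secretName is resolved only within the pod's own namespace, so a same-named secret elsewhere is invisible. A sketch with hypothetical namespace and key names:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic secret-test --from-literal=data-1=value-1
kubectl -n demo-b create secret generic secret-test --from-literal=data-1=other-value
kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolved in demo-a only
EOF
kubectl -n demo-a logs pod-secrets-demo   # prints "value-1", never demo-b's copy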
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1589,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:18:29.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:18:29.527: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:18:31.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:18:33.532: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:35.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:37.530: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:39.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:41.532: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:43.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:45.532: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:47.539: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:49.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:51.530: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:53.581: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = false) May 11 00:18:55.531: INFO: The status of Pod test-webserver-274bd865-265c-47b2-b14c-f253d030a3c0 is Running (Ready = true) May 11 00:18:55.534: INFO: Container started at 2020-05-11 00:18:31 +0000 UTC, pod became ready at 2020-05-11 00:18:54 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:18:55.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7528" for this suite. 
• [SLOW TEST:26.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:18:55.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:18:55.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef" in namespace "projected-438" to be "Succeeded or Failed" May 11 00:18:55.744: INFO: Pod "downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef": Phase="Pending", Reason="", readiness=false. Elapsed: 52.195459ms May 11 00:18:57.791: INFO: Pod "downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099581732s May 11 00:18:59.830: INFO: Pod "downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137807026s STEP: Saw pod success May 11 00:18:59.830: INFO: Pod "downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef" satisfied condition "Succeeded or Failed" May 11 00:18:59.834: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef container client-container: STEP: delete the pod May 11 00:18:59.955: INFO: Waiting for pod downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef to disappear May 11 00:18:59.965: INFO: Pod downwardapi-volume-e688aca1-7a66-4197-be3d-b4f7e4b44aef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:18:59.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-438" for this suite. 
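This test reads the container's own memory limit back through a projected downwardAPI volume. The same wiring by hand (names are illustrative; with the default divisor of 1 the file holds the limit in bytes):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-demo   # 67108864, i.e. 64Mi expressed in bytes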
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1627,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:18:59.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:19:04.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7877" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1629,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:19:04.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:19:11.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3935" for this suite. • [SLOW TEST:7.192 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":104,"skipped":1635,"failed":0} [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:19:11.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4314, will wait for the garbage collector to delete the pods May 11 00:19:15.540: INFO: Deleting Job.batch foo took: 6.850179ms May 11 00:19:15.840: INFO: Terminating Job.batch foo pods took: 300.222515ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:19:55.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4314" for this suite. • [SLOW TEST:43.943 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":105,"skipped":1635,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:19:55.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 11 00:19:55.929: INFO: created pod pod-service-account-defaultsa May 11 00:19:55.929: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 00:19:55.951: INFO: created pod pod-service-account-mountsa May 11 00:19:55.951: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 00:19:55.967: INFO: created pod pod-service-account-nomountsa May 11 00:19:55.967: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 00:19:55.992: INFO: created pod pod-service-account-defaultsa-mountspec May 11 00:19:55.992: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 00:19:56.067: INFO: created pod pod-service-account-mountsa-mountspec May 11 00:19:56.067: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 00:19:56.096: INFO: created pod pod-service-account-nomountsa-mountspec May 11 00:19:56.096: INFO: 
pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 00:19:56.139: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 00:19:56.139: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 00:19:56.200: INFO: created pod pod-service-account-mountsa-nomountspec May 11 00:19:56.200: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 00:19:56.228: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 00:19:56.228: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:19:56.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1387" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":106,"skipped":1654,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:19:56.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:19:56.477: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:20:09.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-551" for this suite. 
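The CRD listing test reduces to: register a definition, then list it (and instances of its kind) through the API. By hand, against a current cluster, with a hypothetical group and names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com
spec:
  group: example.com
  scope: Cluster
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
    listKind: TestCrdList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get crds        # the definition itself appears in the list
kubectl get testcrds    # an (initially empty) list of the custom objects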
• [SLOW TEST:12.834 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":107,"skipped":1655,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:20:09.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 00:20:10.898945 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 00:20:10.899: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:20:10.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1848" for this suite. 
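PropagationPolicy=Orphan tells the garbage collector to strip owner references instead of cascading the delete, so the Deployment's ReplicaSet survives, which is what the wait above checks. Reproduced with kubectl (on kubectl >= 1.20 the flag is --cascade=orphan; the v1.18-era client used in this run spells it --cascade=false):

kubectl create deployment nginx-demo --image=nginx:1.25
kubectl get rs -l app=nginx-demo     # ReplicaSet owned by the Deployment
kubectl delete deployment nginx-demo --cascade=orphan
kubectl get rs -l app=nginx-demo     # still present, owner reference removed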
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":108,"skipped":1666,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:20:10.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 11 00:20:11.011: INFO: Waiting up to 5m0s for pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb" in namespace "containers-8097" to be "Succeeded or Failed" May 11 00:20:11.020: INFO: Pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.73443ms May 11 00:20:13.025: INFO: Pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014496106s May 11 00:20:15.028: INFO: Pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.017821366s May 11 00:20:17.079: INFO: Pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068209795s STEP: Saw pod success May 11 00:20:17.079: INFO: Pod "client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb" satisfied condition "Succeeded or Failed" May 11 00:20:17.081: INFO: Trying to get logs from node latest-worker2 pod client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb container test-container: STEP: delete the pod May 11 00:20:17.114: INFO: Waiting for pod client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb to disappear May 11 00:20:17.129: INFO: Pod client-containers-e55b8787-2eb1-40ab-b5ad-fe1f27fef1bb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:20:17.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8097" for this suite. 
• [SLOW TEST:6.230 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1671,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:20:17.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3114 STEP: creating service affinity-clusterip-transition in namespace services-3114 STEP: creating replication controller affinity-clusterip-transition in namespace services-3114 I0511 00:20:17.594951 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3114, replica count: 3 I0511 00:20:20.645599 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:20:23.645865 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 00:20:23.653: INFO: Creating new exec pod May 11 00:20:28.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3114 execpod-affinitysgj5k -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 11 00:20:28.897: INFO: stderr: "I0511 00:20:28.806669 1417 log.go:172] (0xc000aa9550) (0xc0006e9180) Create stream\nI0511 00:20:28.806729 1417 log.go:172] (0xc000aa9550) (0xc0006e9180) Stream added, broadcasting: 1\nI0511 00:20:28.809076 1417 log.go:172] (0xc000aa9550) Reply frame received for 1\nI0511 00:20:28.809106 1417 log.go:172] (0xc000aa9550) (0xc0006e9720) Create stream\nI0511 00:20:28.809232 1417 log.go:172] (0xc000aa9550) (0xc0006e9720) Stream added, broadcasting: 3\nI0511 00:20:28.810091 1417 log.go:172] (0xc000aa9550) Reply frame received for 3\nI0511 00:20:28.810118 1417 log.go:172] (0xc000aa9550) (0xc0005dd040) Create stream\nI0511 00:20:28.810129 1417 log.go:172] (0xc000aa9550) (0xc0005dd040) Stream added, broadcasting: 5\nI0511 00:20:28.811026 1417 log.go:172] (0xc000aa9550) Reply frame received for 5\nI0511 00:20:28.888822 1417 log.go:172] (0xc000aa9550) Data frame received for 5\nI0511 00:20:28.888871 1417 
log.go:172] (0xc0005dd040) (5) Data frame handling\nI0511 00:20:28.888894 1417 log.go:172] (0xc0005dd040) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0511 00:20:28.889074 1417 log.go:172] (0xc000aa9550) Data frame received for 5\nI0511 00:20:28.889106 1417 log.go:172] (0xc0005dd040) (5) Data frame handling\nI0511 00:20:28.889352 1417 log.go:172] (0xc0005dd040) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0511 00:20:28.889823 1417 log.go:172] (0xc000aa9550) Data frame received for 3\nI0511 00:20:28.889867 1417 log.go:172] (0xc0006e9720) (3) Data frame handling\nI0511 00:20:28.890182 1417 log.go:172] (0xc000aa9550) Data frame received for 5\nI0511 00:20:28.890219 1417 log.go:172] (0xc0005dd040) (5) Data frame handling\nI0511 00:20:28.891971 1417 log.go:172] (0xc000aa9550) Data frame received for 1\nI0511 00:20:28.892005 1417 log.go:172] (0xc0006e9180) (1) Data frame handling\nI0511 00:20:28.892028 1417 log.go:172] (0xc0006e9180) (1) Data frame sent\nI0511 00:20:28.892055 1417 log.go:172] (0xc000aa9550) (0xc0006e9180) Stream removed, broadcasting: 1\nI0511 00:20:28.892075 1417 log.go:172] (0xc000aa9550) Go away received\nI0511 00:20:28.892454 1417 log.go:172] (0xc000aa9550) (0xc0006e9180) Stream removed, broadcasting: 1\nI0511 00:20:28.892474 1417 log.go:172] (0xc000aa9550) (0xc0006e9720) Stream removed, broadcasting: 3\nI0511 00:20:28.892485 1417 log.go:172] (0xc000aa9550) (0xc0005dd040) Stream removed, broadcasting: 5\n" May 11 00:20:28.897: INFO: stdout: "" May 11 00:20:28.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3114 execpod-affinitysgj5k -- /bin/sh -x -c nc -zv -t -w 2 10.104.81.95 80' May 11 00:20:29.094: INFO: stderr: "I0511 00:20:29.036421 1437 log.go:172] (0xc000b2f290) (0xc000aea140) Create stream\nI0511 00:20:29.036482 1437 log.go:172] (0xc000b2f290) (0xc000aea140) Stream added, broadcasting: 1\nI0511 00:20:29.041396 1437 log.go:172] (0xc000b2f290) Reply frame received for 1\nI0511 00:20:29.041446 1437 log.go:172] (0xc000b2f290) (0xc000854dc0) Create stream\nI0511 00:20:29.041481 1437 log.go:172] (0xc000b2f290) (0xc000854dc0) Stream added, broadcasting: 3\nI0511 00:20:29.042465 1437 log.go:172] (0xc000b2f290) Reply frame received for 3\nI0511 00:20:29.042513 1437 log.go:172] (0xc000b2f290) (0xc000838be0) Create stream\nI0511 00:20:29.042529 1437 log.go:172] (0xc000b2f290) (0xc000838be0) Stream added, broadcasting: 5\nI0511 00:20:29.043359 1437 log.go:172] (0xc000b2f290) Reply frame received for 5\nI0511 00:20:29.089069 1437 log.go:172] (0xc000b2f290) Data frame received for 3\nI0511 00:20:29.089091 1437 log.go:172] (0xc000854dc0) (3) Data frame handling\nI0511 00:20:29.089231 1437 log.go:172] (0xc000b2f290) Data frame received for 5\nI0511 00:20:29.089250 1437 log.go:172] (0xc000838be0) (5) Data frame handling\nI0511 00:20:29.089260 1437 log.go:172] (0xc000838be0) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.81.95 80\nConnection to 10.104.81.95 80 port [tcp/http] succeeded!\nI0511 00:20:29.089474 1437 log.go:172] (0xc000b2f290) Data frame received for 5\nI0511 00:20:29.089495 1437 log.go:172] (0xc000838be0) (5) Data frame handling\nI0511 00:20:29.090652 1437 log.go:172] (0xc000b2f290) Data frame received for 1\nI0511 00:20:29.090672 1437 log.go:172] (0xc000aea140) (1) Data frame handling\nI0511 00:20:29.090684 1437 log.go:172] (0xc000aea140) (1) Data frame sent\nI0511 00:20:29.090803 1437 log.go:172] 
(0xc000b2f290) (0xc000aea140) Stream removed, broadcasting: 1\nI0511 00:20:29.090888 1437 log.go:172] (0xc000b2f290) Go away received\nI0511 00:20:29.091031 1437 log.go:172] (0xc000b2f290) (0xc000aea140) Stream removed, broadcasting: 1\nI0511 00:20:29.091043 1437 log.go:172] (0xc000b2f290) (0xc000854dc0) Stream removed, broadcasting: 3\nI0511 00:20:29.091049 1437 log.go:172] (0xc000b2f290) (0xc000838be0) Stream removed, broadcasting: 5\n" May 11 00:20:29.094: INFO: stdout: "" May 11 00:20:29.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3114 execpod-affinitysgj5k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.81.95:80/ ; done' May 11 00:20:29.467: INFO: stderr: "I0511 00:20:29.302509 1457 log.go:172] (0xc000604790) (0xc00015f180) Create stream\nI0511 00:20:29.302579 1457 log.go:172] (0xc000604790) (0xc00015f180) Stream added, broadcasting: 1\nI0511 00:20:29.305971 1457 log.go:172] (0xc000604790) Reply frame received for 1\nI0511 00:20:29.306000 1457 log.go:172] (0xc000604790) (0xc000360460) Create stream\nI0511 00:20:29.306009 1457 log.go:172] (0xc000604790) (0xc000360460) Stream added, broadcasting: 3\nI0511 00:20:29.306966 1457 log.go:172] (0xc000604790) Reply frame received for 3\nI0511 00:20:29.307009 1457 log.go:172] (0xc000604790) (0xc00025d040) Create stream\nI0511 00:20:29.307028 1457 log.go:172] (0xc000604790) (0xc00025d040) Stream added, broadcasting: 5\nI0511 00:20:29.307800 1457 log.go:172] (0xc000604790) Reply frame received for 5\nI0511 00:20:29.371192 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.371223 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.371232 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.371288 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.371355 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.371391 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.376873 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.376895 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.376914 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.377656 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.377674 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.377689 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.377697 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.377705 1457 log.go:172] (0xc00025d040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.377725 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.377793 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.377812 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.377825 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.381507 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.381528 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.381547 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.381802 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.381819 1457 log.go:172] (0xc000360460) (3) Data frame 
handling\nI0511 00:20:29.381835 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.381859 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.381874 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.381890 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.389926 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.389957 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.389991 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0511 00:20:29.390017 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.390041 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.390064 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.390085 1457 log.go:172] (0xc000604790) Data frame received for 3\n http://10.104.81.95:80/\nI0511 00:20:29.390109 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.390163 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.390192 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.390210 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.390269 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.394459 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.394478 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.394501 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.395030 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.395059 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.395081 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.395101 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.395113 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.395125 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.401471 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.401490 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.401503 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.402083 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.402097 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.402104 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.402108 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.402113 1457 log.go:172] (0xc00025d040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.402133 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.402145 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.402151 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.402156 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.406993 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.407019 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.407047 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.407527 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.407554 1457 log.go:172] (0xc00025d040) (5) Data frame 
handling\nI0511 00:20:29.407568 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\nI0511 00:20:29.407589 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.407615 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.407627 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.407661 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.407674 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.407688 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.412393 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.412432 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.412472 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.412832 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.412856 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.412871 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.412904 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.412938 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.412957 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.420182 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.420200 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.420213 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.420961 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.420997 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.421040 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.421070 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.421101 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.421108 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.425327 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.425338 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.425344 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.425885 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.425918 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.425931 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.425946 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.425954 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.425972 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.430140 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.430152 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.430158 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.430589 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.430611 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.430636 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.430651 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.104.81.95:80/\nI0511 00:20:29.430669 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.430682 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.438199 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.438218 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.438226 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.438862 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.438880 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.438896 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.438905 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.438925 1457 log.go:172] (0xc00025d040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.438955 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.438980 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.438996 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.439037 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.442223 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.442256 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.442289 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.442462 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.442476 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.442498 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/I0511 00:20:29.442600 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.442618 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.442630 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.442650 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.442662 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.442672 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n\nI0511 00:20:29.446369 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.446414 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.446460 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.446579 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.446596 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.446606 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.446616 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.446638 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.446661 1457 log.go:172] (0xc00025d040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.450183 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.450202 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.450220 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.451079 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.451095 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.451101 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.451111 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.451116 1457 log.go:172] 
(0xc00025d040) (5) Data frame handling\nI0511 00:20:29.451123 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.451129 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.451134 1457 log.go:172] (0xc00025d040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.451153 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.454578 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.454599 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.454620 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.454981 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.454993 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.455004 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.455010 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.455015 1457 log.go:172] (0xc00025d040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.455026 1457 log.go:172] (0xc00025d040) (5) Data frame sent\nI0511 00:20:29.455206 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.455222 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.455245 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.459077 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.459108 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.459127 1457 log.go:172] (0xc000360460) (3) Data frame sent\nI0511 00:20:29.459479 1457 log.go:172] (0xc000604790) Data frame received for 3\nI0511 00:20:29.459509 1457 log.go:172] (0xc000360460) (3) Data frame handling\nI0511 00:20:29.459539 1457 log.go:172] (0xc000604790) Data frame received for 5\nI0511 00:20:29.459553 1457 log.go:172] (0xc00025d040) (5) Data frame handling\nI0511 00:20:29.461896 1457 log.go:172] (0xc000604790) Data frame received for 1\nI0511 00:20:29.461958 1457 log.go:172] (0xc00015f180) (1) Data frame handling\nI0511 00:20:29.461985 1457 log.go:172] (0xc00015f180) (1) Data frame sent\nI0511 00:20:29.462019 1457 log.go:172] (0xc000604790) (0xc00015f180) Stream removed, broadcasting: 1\nI0511 00:20:29.462039 1457 log.go:172] (0xc000604790) Go away received\nI0511 00:20:29.462488 1457 log.go:172] (0xc000604790) (0xc00015f180) Stream removed, broadcasting: 1\nI0511 00:20:29.462514 1457 log.go:172] (0xc000604790) (0xc000360460) Stream removed, broadcasting: 3\nI0511 00:20:29.462529 1457 log.go:172] (0xc000604790) (0xc00025d040) Stream removed, broadcasting: 5\n" May 11 00:20:29.468: INFO: stdout: "\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-5286l\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-5286l\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-5286l\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-5286l\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-g4sqj\naffinity-clusterip-transition-5286l" May 11 00:20:29.468: INFO: Received response from host: May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: 
affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-5286l May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-5286l May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-5286l May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-5286l May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-g4sqj May 11 00:20:29.468: INFO: Received response from host: affinity-clusterip-transition-5286l May 11 00:20:29.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3114 execpod-affinitysgj5k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.81.95:80/ ; done' May 11 00:20:29.908: INFO: stderr: "I0511 00:20:29.759886 1478 log.go:172] (0xc0000e9600) (0xc000c1a5a0) Create stream\nI0511 00:20:29.759945 1478 log.go:172] (0xc0000e9600) (0xc000c1a5a0) Stream added, broadcasting: 1\nI0511 00:20:29.768649 1478 log.go:172] (0xc0000e9600) Reply frame received for 1\nI0511 00:20:29.768796 1478 log.go:172] (0xc0000e9600) (0xc000662640) Create stream\nI0511 00:20:29.768890 1478 log.go:172] (0xc0000e9600) (0xc000662640) Stream added, broadcasting: 3\nI0511 00:20:29.770642 1478 log.go:172] (0xc0000e9600) Reply frame received for 3\nI0511 00:20:29.770686 1478 log.go:172] (0xc0000e9600) (0xc00025c1e0) Create stream\nI0511 00:20:29.770702 1478 log.go:172] (0xc0000e9600) (0xc00025c1e0) Stream added, broadcasting: 5\nI0511 00:20:29.772179 1478 log.go:172] (0xc0000e9600) Reply frame received for 5\nI0511 00:20:29.831263 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.831312 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.831341 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.831380 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.831417 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.831443 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.833611 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.833646 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.833674 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.834142 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.834159 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.834171 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.834201 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 
00:20:29.834223 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.834233 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.838394 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.838412 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.838424 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.839117 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.839150 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.839170 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.839203 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.839223 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.839240 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.845444 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.845466 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.845482 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.846069 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.846105 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.846151 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.846164 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.846214 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.846246 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.850745 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.850763 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.850777 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.851206 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.851227 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.851236 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.851250 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.851257 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.851265 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.854733 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.854760 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.854777 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.855113 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.855132 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.855154 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.855166 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.855182 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\nI0511 00:20:29.855196 1478 log.go:172] (0xc000662640) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.858542 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.858557 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.858564 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.858866 
1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.858904 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.858922 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.858948 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.858966 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.858990 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.862273 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.862313 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.862348 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.862650 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.862674 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.862698 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.862720 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.862743 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.862766 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.866177 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.866207 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.866240 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.866514 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.866559 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.866579 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.866606 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.866630 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.866660 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\nI0511 00:20:29.866682 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.866702 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.866769 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\nI0511 00:20:29.870285 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.870307 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.870327 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.871216 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.871237 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.871245 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.871255 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.871261 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.871267 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.874464 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.874493 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.874521 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.874955 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.874967 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.874972 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.875009 
1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.875034 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.875060 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.879032 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.879046 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.879058 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.879451 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.879470 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.879476 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.879496 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.879517 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.879532 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.883322 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.883345 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.883366 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.883901 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.883926 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.883941 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.883988 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.884010 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.884026 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.887180 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.887204 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.887225 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.887963 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.887975 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.887985 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\nI0511 00:20:29.887989 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.887993 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.888004 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\nI0511 00:20:29.888054 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.888080 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.888103 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.891552 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.891565 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.891570 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.892098 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.892118 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.892130 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.892155 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.892170 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.892181 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.896254 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.896275 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.896288 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.896700 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.896714 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.896720 1478 log.go:172] (0xc00025c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.81.95:80/\nI0511 00:20:29.896815 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.896836 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.896860 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.900561 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.900574 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.900580 1478 log.go:172] (0xc000662640) (3) Data frame sent\nI0511 00:20:29.901357 1478 log.go:172] (0xc0000e9600) Data frame received for 3\nI0511 00:20:29.901387 1478 log.go:172] (0xc000662640) (3) Data frame handling\nI0511 00:20:29.901544 1478 log.go:172] (0xc0000e9600) Data frame received for 5\nI0511 00:20:29.901576 1478 log.go:172] (0xc00025c1e0) (5) Data frame handling\nI0511 00:20:29.903418 1478 log.go:172] (0xc0000e9600) Data frame received for 1\nI0511 00:20:29.903443 1478 log.go:172] (0xc000c1a5a0) (1) Data frame handling\nI0511 00:20:29.903460 1478 log.go:172] (0xc000c1a5a0) (1) Data frame sent\nI0511 00:20:29.903474 1478 log.go:172] (0xc0000e9600) (0xc000c1a5a0) Stream removed, broadcasting: 1\nI0511 00:20:29.903496 1478 log.go:172] (0xc0000e9600) Go away received\nI0511 00:20:29.903884 1478 log.go:172] (0xc0000e9600) (0xc000c1a5a0) Stream removed, broadcasting: 1\nI0511 00:20:29.903910 1478 log.go:172] (0xc0000e9600) (0xc000662640) Stream removed, broadcasting: 3\nI0511 00:20:29.903921 1478 log.go:172] (0xc0000e9600) (0xc00025c1e0) Stream removed, broadcasting: 5\n" May 11 00:20:29.909: INFO: stdout: "\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6\naffinity-clusterip-transition-ld5c6" May 11 00:20:29.909: INFO: Received response from host: May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response 
from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Received response from host: affinity-clusterip-transition-ld5c6 May 11 00:20:29.909: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3114, will wait for the garbage collector to delete the pods May 11 00:20:30.043: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.308173ms May 11 00:20:30.443: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.247589ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:20:45.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3114" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.171 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":110,"skipped":1686,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:20:45.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 11 00:20:45.405: INFO: Waiting up to 5m0s for pod "var-expansion-641b44bc-f101-4040-9c55-4a782465cf05" in namespace "var-expansion-1002" to be "Succeeded or Failed" May 11 00:20:45.444: INFO: Pod "var-expansion-641b44bc-f101-4040-9c55-4a782465cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 38.885353ms May 11 00:20:47.763: INFO: Pod "var-expansion-641b44bc-f101-4040-9c55-4a782465cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3575404s May 11 00:20:49.766: INFO: Pod "var-expansion-641b44bc-f101-4040-9c55-4a782465cf05": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.361206506s STEP: Saw pod success May 11 00:20:49.766: INFO: Pod "var-expansion-641b44bc-f101-4040-9c55-4a782465cf05" satisfied condition "Succeeded or Failed" May 11 00:20:49.769: INFO: Trying to get logs from node latest-worker pod var-expansion-641b44bc-f101-4040-9c55-4a782465cf05 container dapi-container: STEP: delete the pod May 11 00:20:49.844: INFO: Waiting for pod var-expansion-641b44bc-f101-4040-9c55-4a782465cf05 to disappear May 11 00:20:49.855: INFO: Pod var-expansion-641b44bc-f101-4040-9c55-4a782465cf05 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:20:49.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1002" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":111,"skipped":1694,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:20:49.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4987 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4987 STEP: Creating statefulset with conflicting port in namespace statefulset-4987 STEP: Waiting until pod test-pod will start running in namespace statefulset-4987 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4987 May 11 00:20:56.132: INFO: Observed stateful pod in namespace: statefulset-4987, name: ss-0, uid: 9c0962e4-f79a-4c46-a126-b36b67ece4e7, status phase: Pending. Waiting for statefulset controller to delete. May 11 00:20:56.624: INFO: Observed stateful pod in namespace: statefulset-4987, name: ss-0, uid: 9c0962e4-f79a-4c46-a126-b36b67ece4e7, status phase: Failed. Waiting for statefulset controller to delete. May 11 00:20:56.654: INFO: Observed stateful pod in namespace: statefulset-4987, name: ss-0, uid: 9c0962e4-f79a-4c46-a126-b36b67ece4e7, status phase: Failed. Waiting for statefulset controller to delete. 
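The Pending/Failed churn being observed in these steps is driven by a deliberate host-port collision: the suite pins a plain pod to a node and then creates a StatefulSet whose pod requests the same hostPort, so kubelet rejects ss-0 and the controller keeps deleting and recreating it. A minimal sketch of such a collision, assuming a reachable cluster; the names, port, and image below are illustrative, not the suite's actual manifests:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: conflict-pod              # hypothetical name
spec:
  nodeName: latest-worker         # pin both pods to one node so the ports collide
  containers:
  - name: web
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 80
      hostPort: 21017             # illustrative port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      nodeName: latest-worker
      containers:
      - name: web
        image: docker.io/library/httpd:2.4.38-alpine
        ports:
        - containerPort: 80
          hostPort: 21017         # same hostPort: kubelet fails ss-0 until the first pod is removed
EOF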
May 11 00:20:56.668: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4987 STEP: Removing pod with conflicting port in namespace statefulset-4987 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4987 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 00:21:02.838: INFO: Deleting all statefulsets in ns statefulset-4987 May 11 00:21:02.841: INFO: Scaling statefulset ss to 0 May 11 00:21:22.882: INFO: Waiting for statefulset status.replicas to be updated to 0 May 11 00:21:22.885: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4987" for this suite. • [SLOW TEST:33.086 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":112,"skipped":1710,"failed":0} [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:22.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:27.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9083" for this suite.
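The hostAliases case that just finished checks that kubelet renders spec.hostAliases into the container's /etc/hosts. A minimal sketch of a pod exercising the same behavior, assuming any conformant cluster; the pod name and alias values here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo            # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]  # output should contain the foo.local/bar.local entries
EOF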
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1710,"failed":0} ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:27.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:21:27.194: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:31.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6794" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:31.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-504" for this suite. STEP: Destroying namespace "nspatchtest-715bf473-9ddd-45dc-ab56-4828f2be8af5-2136" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":115,"skipped":1729,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:31.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 11 00:21:31.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2155' May 11 00:21:31.945: INFO: stderr: "" May 11 00:21:31.945: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 00:21:32.951: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:32.951: INFO: Found 0 / 1 May 11 00:21:33.949: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:33.949: INFO: Found 0 / 1 May 11 00:21:34.949: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:34.949: INFO: Found 0 / 1 May 11 00:21:35.950: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:35.950: INFO: Found 1 / 1 May 11 00:21:35.950: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 11 00:21:35.954: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:35.954: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 00:21:35.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-xlbcl --namespace=kubectl-2155 -p {"metadata":{"annotations":{"x":"y"}}}' May 11 00:21:36.074: INFO: stderr: "" May 11 00:21:36.074: INFO: stdout: "pod/agnhost-master-xlbcl patched\n" STEP: checking annotations May 11 00:21:36.090: INFO: Selector matched 1 pods for map[app:agnhost] May 11 00:21:36.090: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:36.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2155" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":116,"skipped":1729,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:36.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:21:36.192: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-838aed83-cc99-4fde-b4df-b3da6e524950" in namespace "security-context-test-1322" to be "Succeeded or Failed" May 11 00:21:36.199: INFO: Pod "busybox-readonly-false-838aed83-cc99-4fde-b4df-b3da6e524950": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219796ms May 11 00:21:38.201: INFO: Pod "busybox-readonly-false-838aed83-cc99-4fde-b4df-b3da6e524950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009173991s May 11 00:21:40.205: INFO: Pod "busybox-readonly-false-838aed83-cc99-4fde-b4df-b3da6e524950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012663607s May 11 00:21:40.205: INFO: Pod "busybox-readonly-false-838aed83-cc99-4fde-b4df-b3da6e524950" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:40.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1322" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:40.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:21:40.335: INFO: Creating deployment "test-recreate-deployment" May 11 00:21:40.391: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 00:21:40.420: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 11 00:21:42.428: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 00:21:42.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753300, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753300, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753300, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:21:44.435: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 00:21:44.443: INFO: Updating deployment test-recreate-deployment May 11 00:21:44.443: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 00:21:45.224: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7433 /apis/apps/v1/namespaces/deployment-7433/deployments/test-recreate-deployment 59d0142f-4252-4dea-9811-daaa6a54f3d7 3215455 2 2020-05-11 00:21:40 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 00:21:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 00:21:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00219b618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 00:21:44 +0000 UTC,LastTransitionTime:2020-05-11 00:21:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-11 00:21:44 +0000 UTC,LastTransitionTime:2020-05-11 00:21:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 11 00:21:45.276: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-7433 /apis/apps/v1/namespaces/deployment-7433/replicasets/test-recreate-deployment-d5667d9c7 a173ac19-d64f-4e47-9c85-e144935b04a3 3215454 1 2020-05-11 00:21:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 59d0142f-4252-4dea-9811-daaa6a54f3d7 0xc005959720 0xc005959721}] [] 
[{kube-controller-manager Update apps/v1 2020-05-11 00:21:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59d0142f-4252-4dea-9811-daaa6a54f3d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005959798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 00:21:45.276: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 00:21:45.276: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-7433 /apis/apps/v1/namespaces/deployment-7433/replicasets/test-recreate-deployment-6d65b9f6d8 aee37d14-a641-4892-9cb2-4dae146c5598 3215444 2 2020-05-11 00:21:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 59d0142f-4252-4dea-9811-daaa6a54f3d7 0xc0059595f7 0xc0059595f8}] [] [{kube-controller-manager Update apps/v1 2020-05-11 00:21:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59d0142f-4252-4dea-9811-daaa6a54f3d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0059596a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 00:21:45.281: INFO: Pod "test-recreate-deployment-d5667d9c7-trwnf" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-trwnf test-recreate-deployment-d5667d9c7- deployment-7433 /api/v1/namespaces/deployment-7433/pods/test-recreate-deployment-d5667d9c7-trwnf 5404bd97-6866-4be0-b00d-ea1dbf57ecba 3215456 0 2020-05-11 00:21:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 a173ac19-d64f-4e47-9c85-e144935b04a3 0xc005959c60 0xc005959c61}] [] [{kube-controller-manager Update v1 2020-05-11 00:21:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a173ac19-d64f-4e47-9c85-e144935b04a3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 00:21:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m8qtg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m8qtg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m8qtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:21:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:21:45 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:21:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:21:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 00:21:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:45.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7433" for this suite. • [SLOW TEST:5.103 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":118,"skipped":1770,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:45.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
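The steps that follow create a pod whose container declares a postStart exec hook, confirm the hook ran, and then delete the pod. A minimal sketch of such a spec, reusing the pod name from this run; the container image and hook command are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hooked
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo ran > /tmp/poststart"]   # executes immediately after the container starts
EOF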
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 00:21:53.565: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 00:21:53.612: INFO: Pod pod-with-poststart-exec-hook still exists May 11 00:21:55.613: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 00:21:55.618: INFO: Pod pod-with-poststart-exec-hook still exists May 11 00:21:57.612: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 00:21:57.617: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:21:57.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7496" for this suite. • [SLOW TEST:12.291 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":1772,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:21:57.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 11 00:21:57.810: INFO: Waiting up to 5m0s for pod "var-expansion-240fc971-8489-4ca8-b30e-56d957135c58" in namespace "var-expansion-3499" to be "Succeeded or Failed" May 11 00:21:57.846: INFO: Pod "var-expansion-240fc971-8489-4ca8-b30e-56d957135c58": Phase="Pending", Reason="", readiness=false. Elapsed: 36.256506ms May 11 00:21:59.855: INFO: Pod "var-expansion-240fc971-8489-4ca8-b30e-56d957135c58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045481581s May 11 00:22:01.876: INFO: Pod "var-expansion-240fc971-8489-4ca8-b30e-56d957135c58": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066097791s STEP: Saw pod success May 11 00:22:01.876: INFO: Pod "var-expansion-240fc971-8489-4ca8-b30e-56d957135c58" satisfied condition "Succeeded or Failed" May 11 00:22:01.878: INFO: Trying to get logs from node latest-worker pod var-expansion-240fc971-8489-4ca8-b30e-56d957135c58 container dapi-container: STEP: delete the pod May 11 00:22:01.914: INFO: Waiting for pod var-expansion-240fc971-8489-4ca8-b30e-56d957135c58 to disappear May 11 00:22:01.924: INFO: Pod var-expansion-240fc971-8489-4ca8-b30e-56d957135c58 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:22:01.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3499" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":1776,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:22:01.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 11 00:22:02.005: INFO: Pod name pod-release: Found 0 pods out of 1 May 11 00:22:07.080: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:22:07.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8618" for this suite. 
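For reference, the release behaviour exercised above can be reproduced with a ReplicationController along these lines (a minimal sketch; the name, label, and image are illustrative, not the exact spec the suite generated):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release          # pods matching this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/httpd:2.4.38-alpine
```

Relabelling a matched pod out of the selector, e.g. `kubectl label pod <pod-name> name=released --overwrite`, makes the controller release (orphan) that pod and create a replacement to restore the desired replica count, which is what the "Then the pod is released" step verifies.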
• [SLOW TEST:5.297 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":121,"skipped":1787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:22:07.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:22:07.758: INFO: Create a RollingUpdate DaemonSet May 11 00:22:07.761: INFO: Check that daemon pods launch on every node of the cluster May 11 00:22:07.805: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:07.822: INFO: Number of nodes with available pods: 0 May 11 00:22:07.822: INFO: Node latest-worker is running more than one daemon pod May 11 00:22:08.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:08.829: INFO: Number of nodes with available pods: 0 May 11 00:22:08.830: INFO: Node latest-worker is running more than one daemon pod May 11 00:22:10.034: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:10.200: INFO: Number of nodes with available pods: 0 May 11 00:22:10.200: INFO: Node latest-worker is running more than one daemon pod May 11 00:22:10.887: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:10.891: INFO: Number of nodes with available pods: 0 May 11 00:22:10.891: INFO: Node latest-worker is running more than one daemon pod May 11 00:22:11.826: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:11.828: INFO: Number of nodes with available pods: 0 May 11 00:22:11.828: INFO: Node latest-worker is running more than one daemon pod May 11 00:22:12.932: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:12.936: INFO: Number of nodes with available pods: 2 May 11 
00:22:12.936: INFO: Number of running nodes: 2, number of available pods: 2 May 11 00:22:12.936: INFO: Update the DaemonSet to trigger a rollout May 11 00:22:12.943: INFO: Updating DaemonSet daemon-set May 11 00:22:25.013: INFO: Roll back the DaemonSet before rollout is complete May 11 00:22:25.019: INFO: Updating DaemonSet daemon-set May 11 00:22:25.019: INFO: Make sure DaemonSet rollback is complete May 11 00:22:25.061: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:25.061: INFO: Pod daemon-set-kf76m is not available May 11 00:22:25.128: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:26.132: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:26.132: INFO: Pod daemon-set-kf76m is not available May 11 00:22:26.136: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:27.132: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:27.132: INFO: Pod daemon-set-kf76m is not available May 11 00:22:27.136: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:28.133: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:28.133: INFO: Pod daemon-set-kf76m is not available May 11 00:22:28.138: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:29.133: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:29.133: INFO: Pod daemon-set-kf76m is not available May 11 00:22:29.137: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:30.132: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:30.132: INFO: Pod daemon-set-kf76m is not available May 11 00:22:30.137: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:31.134: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:31.134: INFO: Pod daemon-set-kf76m is not available May 11 00:22:31.139: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:32.133: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 11 00:22:32.133: INFO: Pod daemon-set-kf76m is not available May 11 00:22:32.138: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:33.133: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:33.133: INFO: Pod daemon-set-kf76m is not available May 11 00:22:33.138: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:34.133: INFO: Wrong image for pod: daemon-set-kf76m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 00:22:34.133: INFO: Pod daemon-set-kf76m is not available May 11 00:22:34.138: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 00:22:35.131: INFO: Pod daemon-set-wfk2t is not available May 11 00:22:35.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6779, will wait for the garbage collector to delete the pods May 11 00:22:35.203: INFO: Deleting DaemonSet.extensions daemon-set took: 8.048434ms May 11 00:22:35.503: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230803ms May 11 00:22:38.007: INFO: Number of nodes with available pods: 0 May 11 00:22:38.007: INFO: Number of running nodes: 0, number of available pods: 0 May 11 00:22:38.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6779/daemonsets","resourceVersion":"3215846"},"items":null} May 11 00:22:38.013: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6779/pods","resourceVersion":"3215846"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:22:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6779" for this suite. 
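The rollback sequence above (update to a bad image, then roll back before the rollout finishes) corresponds to a RollingUpdate DaemonSet along these lines (a sketch; names, labels, and the container name are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # pods are replaced node by node on template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Updating `image` to the unresolvable `foo:non-existent` starts a rollout that can never become available; `kubectl rollout undo daemonset/daemon-set` then restores the previous template. The point of the test is visible in the log: only the pod that had already moved to the bad image (daemon-set-kf76m) is replaced, while pods still running the old image are left untouched, i.e. no unnecessary restarts.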
• [SLOW TEST:30.799 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":122,"skipped":1819,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:22:38.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:22:38.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:22:40.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753358, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753358, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753358, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753358, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:22:43.613: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:22:43.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9313" for this suite. STEP: Destroying namespace "webhook-9313-markers" for this suite. 
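The registration performed via the AdmissionRegistration API above is equivalent to creating a MutatingWebhookConfiguration of roughly this shape (a sketch; the webhook name, handler path, and CA bundle are illustrative assumptions, while the service name and namespace are taken from the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods-example            # illustrative name
webhooks:
- name: pod-defaulter.example.com      # illustrative; must be a fully qualified name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-9313          # namespace created by the suite above
      name: e2e-test-webhook
      path: /mutating-pods             # assumed handler path on the webhook pod
    caBundle: <base64-encoded CA>      # placeholder for the server cert set up in BeforeEach
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```

Once registered, every pod CREATE in scope is sent to the webhook service, which can patch the object before admission; the test then creates a pod and checks that the patch, plus any API defaulting applied on top of it, is present in the stored object.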
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.920 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":123,"skipped":1827,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:22:43.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 11 00:22:48.072: INFO: &Pod{ObjectMeta:{send-events-dedc1c62-7f45-4fc1-af14-c2d81317cc8c events-6478 /api/v1/namespaces/events-6478/pods/send-events-dedc1c62-7f45-4fc1-af14-c2d81317cc8c fbaf6bd7-b3e9-4762-8b56-bd6d2a9e12ea 3215970 0 2020-05-11 00:22:44 +0000 UTC map[name:foo time:23400697] map[] [] [] [{e2e.test Update v1 2020-05-11 00:22:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 00:22:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.88\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x7gc4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x7gc4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x7gc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:22:47 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:22:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:22:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.88,StartTime:2020-05-11 00:22:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 00:22:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://e8c73a2b69ec0f60dd16050070ba9c3bf25347fd1bcbe815a26452a5c336ccfc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 11 00:22:50.080: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 11 00:22:52.095: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:22:52.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6478" for this suite. • [SLOW TEST:8.162 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":124,"skipped":1834,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:22:52.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 11 00:22:52.214: INFO: Waiting up to 1m0s for all nodes to be ready May 11 00:23:52.238: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:23:52.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 11 00:23:56.352: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:24:14.643: INFO: pods created so far: [1 1 1] May 11 00:24:14.643: INFO: length of pods created so far: 3 May 11 00:24:30.652: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:24:37.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-654" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:24:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3347" for this suite. 
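The preemption path being verified rests on pod priority. A minimal setup of the kind this test constructs looks roughly like this (a sketch; the class name, value, pod name, and image are illustrative, and the actual test drives the pods through ReplicaSets):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative
value: 1000                    # the higher value wins during preemption
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor              # illustrative
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
```

When the chosen node (latest-worker2 above) is saturated with lower-priority pods, scheduling a higher-priority pod makes the scheduler evict just enough victims to fit it; the `pods created so far: [1 1 1]` → `[2 2 1]` progression reflects the ReplicaSets re-creating pods as preemption proceeds.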
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:105.684 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":125,"skipped":1848,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:24:37.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 00:24:41.941: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:24:42.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-515" for this suite. 
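The `Expected: &{DONE} to match Container's Termination Message: DONE` check above comes from a container that writes nothing to the termination-message file but logs before failing. A minimal reproduction (pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```

Because the container fails and leaves /dev/termination-log empty, the kubelet falls back to the tail of the container log, so the reported termination message becomes `DONE`.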
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":1867,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:24:42.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 11 00:24:42.140: INFO: Waiting up to 5m0s for pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c" in namespace "emptydir-8171" to be "Succeeded or Failed" May 11 00:24:42.143: INFO: Pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271105ms May 11 00:24:44.310: INFO: Pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169475684s May 11 00:24:46.313: INFO: Pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172377464s May 11 00:24:48.316: INFO: Pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175919201s STEP: Saw pod success May 11 00:24:48.316: INFO: Pod "pod-0f5550ea-aa76-4869-ae22-560cb854dc2c" satisfied condition "Succeeded or Failed" May 11 00:24:48.318: INFO: Trying to get logs from node latest-worker pod pod-0f5550ea-aa76-4869-ae22-560cb854dc2c container test-container: STEP: delete the pod May 11 00:24:48.418: INFO: Waiting for pod pod-0f5550ea-aa76-4869-ae22-560cb854dc2c to disappear May 11 00:24:48.425: INFO: Pod pod-0f5550ea-aa76-4869-ae22-560cb854dc2c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:24:48.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8171" for this suite. 
• [SLOW TEST:6.417 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":1867,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:24:48.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-e09de39b-c48e-4dc2-b8fb-d75f58ff8aab STEP: Creating a pod to test consume secrets May 11 00:24:48.511: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d" in namespace "projected-8583" to be "Succeeded or Failed" May 11 00:24:48.551: INFO: Pod "pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.632462ms May 11 00:24:50.556: INFO: Pod "pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045488366s May 11 00:24:52.644: INFO: Pod "pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133217897s STEP: Saw pod success May 11 00:24:52.644: INFO: Pod "pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d" satisfied condition "Succeeded or Failed" May 11 00:24:52.647: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d container projected-secret-volume-test: STEP: delete the pod May 11 00:24:52.727: INFO: Waiting for pod pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d to disappear May 11 00:24:52.736: INFO: Pod pod-projected-secrets-b1369d6b-0a79-4438-8af1-cc810ec4fd7d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:24:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8583" for this suite. 
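The projected-secret pod above exercises `defaultMode` on a projected volume; a minimal equivalent looks like this (names are illustrative, the suite generates random ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400                # applied to every file the volume projects
      sources:
      - secret:
          name: projected-secret-test  # illustrative secret name
```

The test asserts both the file contents and that the mode reported by `ls -l` matches the requested `defaultMode`.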
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":1888,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:24:52.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1476 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 11 00:24:52.926: INFO: Found 0 stateful pods, waiting for 3 May 11 00:25:02.931: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:02.931: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:02.931: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 00:25:12.934: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:12.934: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:12.934: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 00:25:12.961: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 00:25:23.027: INFO: Updating stateful set ss2 May 11 00:25:23.051: INFO: Waiting for Pod statefulset-1476/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 11 00:25:33.704: INFO: Found 2 stateful pods, waiting for 3 May 11 00:25:43.710: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:43.710: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 00:25:43.710: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 00:25:43.734: INFO: Updating stateful set ss2 May 11 00:25:43.783: INFO: Waiting for Pod statefulset-1476/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 00:25:53.791: INFO: Waiting for Pod statefulset-1476/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 
00:26:03.809: INFO: Updating stateful set ss2 May 11 00:26:03.849: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update May 11 00:26:03.849: INFO: Waiting for Pod statefulset-1476/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 00:26:13.856: INFO: Waiting for StatefulSet statefulset-1476/ss2 to complete update May 11 00:26:13.857: INFO: Waiting for Pod statefulset-1476/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 00:26:23.857: INFO: Deleting all statefulset in ns statefulset-1476 May 11 00:26:23.860: INFO: Scaling statefulset ss2 to 0 May 11 00:26:53.917: INFO: Waiting for statefulset status.replicas updated to 0 May 11 00:26:53.920: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:26:53.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1476" for this suite. • [SLOW TEST:121.162 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":129,"skipped":1889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:26:53.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-1586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1586 to expose endpoints map[] May 11 00:26:54.093: INFO: Get endpoints failed (17.139737ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 00:26:55.098: INFO: successfully validated that service multi-endpoint-test in namespace services-1586 exposes endpoints map[] (1.021395693s elapsed) STEP: Creating pod pod1 in namespace services-1586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1586 to expose endpoints map[pod1:[100]] May 11 00:26:59.331: INFO: successfully validated 
that service multi-endpoint-test in namespace services-1586 exposes endpoints map[pod1:[100]] (4.225324408s elapsed) STEP: Creating pod pod2 in namespace services-1586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1586 to expose endpoints map[pod1:[100] pod2:[101]] May 11 00:27:03.515: INFO: successfully validated that service multi-endpoint-test in namespace services-1586 exposes endpoints map[pod1:[100] pod2:[101]] (4.180295101s elapsed) STEP: Deleting pod pod1 in namespace services-1586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1586 to expose endpoints map[pod2:[101]] May 11 00:27:04.587: INFO: successfully validated that service multi-endpoint-test in namespace services-1586 exposes endpoints map[pod2:[101]] (1.067672442s elapsed) STEP: Deleting pod pod2 in namespace services-1586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1586 to expose endpoints map[] May 11 00:27:05.638: INFO: successfully validated that service multi-endpoint-test in namespace services-1586 exposes endpoints map[] (1.038639619s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:27:05.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1586" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.769 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":130,"skipped":1921,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:27:05.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-7pqw STEP: Creating a pod to test atomic-volume-subpath May 11 00:27:05.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7pqw" in namespace "subpath-8242" to be "Succeeded or Failed" May 11 00:27:05.980: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.105022ms May 11 00:27:07.984: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017593891s May 11 00:27:09.988: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 4.021498744s May 11 00:27:11.992: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 6.025847987s May 11 00:27:13.997: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 8.030733046s May 11 00:27:16.001: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 10.034660526s May 11 00:27:18.005: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 12.038949387s May 11 00:27:20.012: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 14.045747127s May 11 00:27:22.016: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 16.049550075s May 11 00:27:24.020: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 18.053462828s May 11 00:27:26.024: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 20.057701774s May 11 00:27:28.027: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Running", Reason="", readiness=true. Elapsed: 22.06082394s May 11 00:27:30.142: INFO: Pod "pod-subpath-test-projected-7pqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.17564172s STEP: Saw pod success May 11 00:27:30.142: INFO: Pod "pod-subpath-test-projected-7pqw" satisfied condition "Succeeded or Failed" May 11 00:27:30.145: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-7pqw container test-container-subpath-projected-7pqw: STEP: delete the pod May 11 00:27:30.275: INFO: Waiting for pod pod-subpath-test-projected-7pqw to disappear May 11 00:27:30.282: INFO: Pod pod-subpath-test-projected-7pqw no longer exists STEP: Deleting pod pod-subpath-test-projected-7pqw May 11 00:27:30.282: INFO: Deleting pod "pod-subpath-test-projected-7pqw" in namespace "subpath-8242" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:27:30.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8242" for this suite. 
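The pod-subpath-test-projected-7pqw pod mounts a single entry of a projected volume via `subPath`; schematically (a sketch with illustrative names, image, and command):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "for i in $(seq 1 20); do cat /test-volume/content; sleep 1; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/content
      subPath: the-key              # mount one key of the volume, not the whole volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: subpath-configmap   # illustrative; must contain the key "the-key"
```

The long Running phase in the log (roughly 24 seconds) is the container repeatedly reading through the subpath while the test runs; the point of the "Atomic writer volumes" group is that reads through a subPath of an atomically-updated volume stay consistent.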
• [SLOW TEST:24.577 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":131,"skipped":1969,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:27:30.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:27:31.178: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:27:33.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753651, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753651, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753651, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753651, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:27:36.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the 
validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:27:36.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8744" for this suite. STEP: Destroying namespace "webhook-8744-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.411 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":132,"skipped":1979,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:27:36.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 11 00:27:36.746: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:27:54.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6305" for this suite.
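The "mark a version not served" step flips the `served` bit on one version of a multi-version CRD, after which that version must disappear from the published OpenAPI spec. Schematically (group and names are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.crd-publish-openapi-test.example.com   # must be <plural>.<group>
spec:
  group: crd-publish-openapi-test.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false     # changed from true; v2 definitions drop out of /openapi/v2
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

The test checks both directions: the unserved version's definitions are removed from the aggregated spec, and the still-served version's definitions are left unchanged.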
• [SLOW TEST:18.072 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":133,"skipped":1983,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:27:54.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:27:55.439: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:27:57.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753675, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753675, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753675, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753675, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:28:00.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery 
document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:28:00.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8544" for this suite. STEP: Destroying namespace "webhook-8544-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.922 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":134,"skipped":1985,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:28:00.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-5beaf165-28f0-4c00-8b59-15992e0cee83 in namespace container-probe-1084 May 11 00:28:04.863: INFO: Started pod liveness-5beaf165-28f0-4c00-8b59-15992e0cee83 in namespace container-probe-1084 STEP: checking the pod's current state and verifying that restartCount is present May 11 00:28:04.866: INFO: Initial restart count of pod liveness-5beaf165-28f0-4c00-8b59-15992e0cee83 is 0 May 11 00:28:28.919: INFO: Restart count of pod container-probe-1084/liveness-5beaf165-28f0-4c00-8b59-15992e0cee83 is now 1 (24.052958052s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:28:28.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1084" for this suite. 
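For reference, the restart this probe test observes (restartCount going 0 -> 1 after roughly 24s) is driven by a pod shaped like the sketch below. The image and args are assumed stand-ins for the suite's liveness container, and Handler is the field name in this release's API (renamed ProbeHandler in later releases):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // livenessPod sketches a pod whose /healthz endpoint eventually fails, so
    // the kubelet kills and restarts the container, raising restartCount.
    func livenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"}, // illustrative
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.12", // assumed image
                    Args:  []string{"liveness"},                      // serves /healthz, then starts failing
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
                RestartPolicy: corev1.RestartPolicyAlways,
            },
        }
    }

    func main() {}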
• [SLOW TEST:28.319 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":1998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:28:29.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-76f5474a-754c-4cf0-91a3-ce13b19e89c1 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:28:35.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8947" for this suite. 
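The "text data" / "binary data" pair this test waits for comes from the two payload fields of a ConfigMap, both of which surface as files when the ConfigMap is mounted as a volume. A minimal sketch, with illustrative names and bytes:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // binaryConfigMap shows the two payload fields the test exercises: Data for
    // UTF-8 text and BinaryData for arbitrary (non-UTF-8) bytes.
    func binaryConfigMap() *corev1.ConfigMap {
        return &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"}, // illustrative
            Data:       map[string]string{"text": "hello"},
            BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe}},
        }
    }

    func main() {}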
• [SLOW TEST:6.421 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:28:35.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:28:35.515: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 11 00:28:38.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 create -f -' May 11 00:28:41.898: INFO: stderr: "" May 11 00:28:41.898: INFO: stdout: "e2e-test-crd-publish-openapi-4979-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 00:28:41.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 delete e2e-test-crd-publish-openapi-4979-crds test-foo' May 11 00:28:42.017: INFO: stderr: "" May 11 00:28:42.017: INFO: stdout: "e2e-test-crd-publish-openapi-4979-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 11 00:28:42.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 apply -f -' May 11 00:28:42.367: INFO: stderr: "" May 11 00:28:42.367: INFO: stdout: "e2e-test-crd-publish-openapi-4979-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 00:28:42.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 delete e2e-test-crd-publish-openapi-4979-crds test-foo' May 11 00:28:42.494: INFO: stderr: "" May 11 00:28:42.494: INFO: stdout: "e2e-test-crd-publish-openapi-4979-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 11 00:28:42.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 create -f -' May 11 00:28:42.748: INFO: rc: 1 May 11 00:28:42.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2187 apply -f -' May 11 00:28:43.015: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 11 00:28:43.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 create -f -' May 11 00:28:43.253: INFO: rc: 1 May 11 00:28:43.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2187 apply -f -' May 11 00:28:43.519: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 11 00:28:43.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4979-crds' May 11 00:28:43.784: INFO: stderr: "" May 11 00:28:43.784: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4979-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 11 00:28:43.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4979-crds.metadata' May 11 00:28:44.051: INFO: stderr: "" May 11 00:28:44.051: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4979-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 11 00:28:44.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4979-crds.spec' May 11 00:28:44.316: INFO: stderr: "" May 11 00:28:44.316: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4979-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 11 00:28:44.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4979-crds.spec.bars' May 11 00:28:44.657: INFO: stderr: "" May 11 00:28:44.657: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4979-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 11 00:28:44.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4979-crds.spec.bars2' May 11 00:28:44.887: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:28:47.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2187" for this suite. • [SLOW TEST:12.391 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":137,"skipped":2104,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:28:47.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:28:48.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:28:50.648: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753728, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753728, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724753728, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:28:53.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:28:53.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7021-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:28:54.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2887" for this suite. STEP: Destroying namespace "webhook-2887-markers" for this suite. 
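For context on "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API": the registration object is roughly the admissionregistration.k8s.io/v1 sketch below. The service name, path, group, and resource plural here are assumptions for illustration; the real test wires in its own generated names and CA bundle.

    package main

    import (
        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // crMutatingWebhook routes CREATE and UPDATE of the custom resource through
    // an in-cluster webhook service, for both served versions.
    func crMutatingWebhook(caBundle []byte) *admissionv1.MutatingWebhookConfiguration {
        path := "/mutating-custom-resource" // illustrative path
        sideEffects := admissionv1.SideEffectClassNone
        return &admissionv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "cr-mutator-demo"}, // illustrative
            Webhooks: []admissionv1.MutatingWebhook{{
                Name: "cr-mutator.example.com",
                ClientConfig: admissionv1.WebhookClientConfig{
                    Service:  &admissionv1.ServiceReference{Namespace: "default", Name: "e2e-test-webhook", Path: &path},
                    CABundle: caBundle,
                },
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{"webhook.example.com"},
                        APIVersions: []string{"v1", "v2"},
                        Resources:   []string{"e2e-test-webhook-7021-crds"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }
    }

    func main() {}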
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.207 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":138,"skipped":2122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:28:55.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:28:55.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc" in namespace "projected-7148" to be "Succeeded or Failed" May 11 00:28:55.160: INFO: Pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc": Phase="Pending", Reason="", readiness=false. Elapsed: 37.40725ms May 11 00:28:57.166: INFO: Pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042435131s May 11 00:28:59.748: INFO: Pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.624532901s May 11 00:29:01.752: INFO: Pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.628752058s STEP: Saw pod success May 11 00:29:01.752: INFO: Pod "downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc" satisfied condition "Succeeded or Failed" May 11 00:29:01.754: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc container client-container: STEP: delete the pod May 11 00:29:01.864: INFO: Waiting for pod downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc to disappear May 11 00:29:01.903: INFO: Pod downwardapi-volume-26670a36-2f33-4427-a7d8-0ef0fb1532bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:29:01.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7148" for this suite. 
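The "downward API volume plugin" being tested here projects the container's own resource request into a file. A minimal Go sketch, assuming a busybox image in place of the suite's test image:

    package main

    import (
        "k8s.io/apimachinery/pkg/api/resource"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // memoryRequestPod projects the container's requests.memory into a file
    // (/etc/podinfo/memory_request) that the test then reads back.
    func memoryRequestPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"}, // illustrative
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // assumed stand-in
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }

    func main() {}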
• [SLOW TEST:6.894 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:29:01.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-1995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1995 to expose endpoints map[] May 11 00:29:02.143: INFO: Get endpoints failed (62.449607ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 00:29:03.148: INFO: successfully validated that service endpoint-test2 in namespace services-1995 exposes endpoints map[] (1.067250085s elapsed) STEP: Creating pod pod1 in namespace services-1995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1995 to expose endpoints map[pod1:[80]] May 11 00:29:06.263: INFO: successfully validated that service endpoint-test2 in namespace services-1995 exposes endpoints map[pod1:[80]] (3.107032682s elapsed) STEP: Creating pod pod2 in namespace services-1995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1995 to expose endpoints map[pod1:[80] pod2:[80]] May 11 00:29:10.442: INFO: successfully validated that service endpoint-test2 in namespace services-1995 exposes endpoints map[pod1:[80] pod2:[80]] (4.176140212s elapsed) STEP: Deleting pod pod1 in namespace services-1995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1995 to expose endpoints map[pod2:[80]] May 11 00:29:11.485: INFO: successfully validated that service endpoint-test2 in namespace services-1995 exposes endpoints map[pod2:[80]] (1.03817506s elapsed) STEP: Deleting pod pod2 in namespace services-1995 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1995 to expose endpoints map[] May 11 00:29:12.542: INFO: successfully validated that service endpoint-test2 in namespace services-1995 exposes endpoints map[] (1.052179526s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:29:12.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1995" for this suite. 
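The endpoints map[...] assertions above track a selector-driven Service: each ready pod carrying the matching label is added to the service's Endpoints on port 80, and removed again when the pod is deleted. A sketch of such a service, with an assumed label key:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // endpointService selects pods by label; the endpoints controller keeps the
    // Endpoints object in sync with the ready pods, which is what the test polls.
    func endpointService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "endpoint-test2"}, // assumed label key
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                    Protocol:   corev1.ProtocolTCP,
                }},
            },
        }
    }

    func main() {}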
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.702 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":140,"skipped":2296,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:29:12.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 00:29:12.693: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 00:29:12.704: INFO: Waiting for terminating namespaces to be deleted... May 11 00:29:12.706: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 11 00:29:12.710: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 00:29:12.710: INFO: Container kindnet-cni ready: true, restart count 0 May 11 00:29:12.710: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 00:29:12.710: INFO: Container kube-proxy ready: true, restart count 0 May 11 00:29:12.710: INFO: pod1 from services-1995 started at 2020-05-11 00:29:03 +0000 UTC (1 container status recorded) May 11 00:29:12.710: INFO: Container pause ready: false, restart count 0 May 11 00:29:12.711: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 11 00:29:12.714: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 00:29:12.714: INFO: Container kindnet-cni ready: true, restart count 0 May 11 00:29:12.714: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 00:29:12.714: INFO: Container kube-proxy ready: true, restart count 0 May 11 00:29:12.714: INFO: pod2 from services-1995 started at 2020-05-11 00:29:06 +0000 UTC (1 container status recorded) May 11 00:29:12.714: INFO: Container pause ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d09b3fef-c055-44e9-8205-842456a4a117 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d09b3fef-c055-44e9-8205-842456a4a117 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d09b3fef-c055-44e9-8205-842456a4a117 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:21.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8543" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.397 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":141,"skipped":2317,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:21.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-db501db3-e2d4-4cef-89bc-3e7101bf9f82 STEP: Creating a pod to test consume secrets May 11 00:34:21.333: INFO: Waiting up to 5m0s for pod "pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9" in namespace "secrets-1569" to be "Succeeded or Failed" May 11 00:34:21.349: INFO: Pod "pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.969823ms May 11 00:34:23.353: INFO: Pod "pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019871629s May 11 00:34:25.358: INFO: Pod "pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.024817693s STEP: Saw pod success May 11 00:34:25.358: INFO: Pod "pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9" satisfied condition "Succeeded or Failed" May 11 00:34:25.361: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9 container secret-volume-test: STEP: delete the pod May 11 00:34:25.537: INFO: Waiting for pod pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9 to disappear May 11 00:34:25.545: INFO: Pod pod-secrets-0dee04a8-45a5-4dd8-bc9a-cbf3752c6ac9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:25.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1569" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:25.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 11 00:34:30.231: INFO: Successfully updated pod "labelsupdate2596de12-b655-43ce-82db-b7bd6372d8b0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3976" for this suite. 
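What lets the labels-update test above pass without restarting the container is that a downward API file backed by metadata.labels is re-rendered by the kubelet when the pod's labels change. The relevant volume, sketched:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // labelsVolume backs a file with metadata.labels; the kubelet rewrites the
    // file on label changes, so a running container sees updates in place.
    func labelsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "labels",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                    }},
                },
            },
        }
    }

    func main() {}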
• [SLOW TEST:6.750 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2360,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:32.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-62c38d4e-ae7e-4fd5-8825-db355acf836a STEP: Creating a pod to test consume configMaps May 11 00:34:32.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7" in namespace "projected-8054" to be "Succeeded or Failed" May 11 00:34:32.510: INFO: Pod "pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.595993ms May 11 00:34:34.514: INFO: Pod "pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049344474s May 11 00:34:36.525: INFO: Pod "pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060411591s STEP: Saw pod success May 11 00:34:36.525: INFO: Pod "pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7" satisfied condition "Succeeded or Failed" May 11 00:34:36.527: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7 container projected-configmap-volume-test: STEP: delete the pod May 11 00:34:36.577: INFO: Waiting for pod pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7 to disappear May 11 00:34:36.588: INFO: Pod pod-projected-configmaps-c67d8357-07ad-4f64-a70a-a66adbe594a7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:36.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8054" for this suite. 
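The "with mappings" variant differs from the plain ConfigMap volume case in that Items remaps a key to a chosen file path, and only the listed keys are projected. A sketch, with assumed key and path names:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // mappedConfigMapVolume projects one ConfigMap key to an explicit file path
    // under the mount point, instead of using the key name as the file name.
    func mappedConfigMapVolume() corev1.Volume {
        return corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"}, // illustrative
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // assumed key name
                                Path: "path/to/data-2",  // file created under the mount point
                            }},
                        },
                    }},
                },
            },
        }
    }

    func main() {}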
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:36.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 11 00:34:36.872: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. May 11 00:34:37.206: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 11 00:34:39.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:34:41.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754077, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:34:44.126: INFO: Waited 620.058184ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:44.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4566" for this suite. • [SLOW TEST:8.359 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":145,"skipped":2399,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:44.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-9d206ac2-b3f8-44b8-a739-b4213afe1e06 STEP: Creating configMap with name cm-test-opt-upd-97fd07a0-0fa6-4ae5-a55b-14cef7f2aa77 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9d206ac2-b3f8-44b8-a739-b4213afe1e06 STEP: Updating configmap cm-test-opt-upd-97fd07a0-0fa6-4ae5-a55b-14cef7f2aa77 STEP: Creating configMap with name cm-test-opt-create-783f0df6-de71-4ff1-a4fb-8946654e4000 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:53.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4653" for this suite. 
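The "optional" in this test's name refers to the Optional flag on the volume source: with it set, the pod keeps running even while one referenced ConfigMap is deleted and another has not been created yet, and the projected files follow the ConfigMaps as they change. Sketched:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // optionalConfigMapVolume tolerates a missing ConfigMap: the pod starts
    // anyway, and the volume contents track create/update/delete of the source.
    func optionalConfigMapVolume(name string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                    Optional:             &optional,
                },
            },
        }
    }

    func main() {}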
• [SLOW TEST:8.326 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2404,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:53.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 00:34:53.400: INFO: Waiting up to 5m0s for pod "pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9" in namespace "emptydir-7761" to be "Succeeded or Failed" May 11 00:34:53.410: INFO: Pod "pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.980525ms May 11 00:34:55.445: INFO: Pod "pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045127132s May 11 00:34:57.449: INFO: Pod "pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049120619s STEP: Saw pod success May 11 00:34:57.449: INFO: Pod "pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9" satisfied condition "Succeeded or Failed" May 11 00:34:57.452: INFO: Trying to get logs from node latest-worker2 pod pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9 container test-container: STEP: delete the pod May 11 00:34:57.512: INFO: Waiting for pod pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9 to disappear May 11 00:34:57.523: INFO: Pod pod-f0cffde8-5e6b-41ef-847a-d29fd7d3cec9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:34:57.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7761" for this suite. 
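The "(non-root,0777,tmpfs)" triple in the test name maps onto three pod-spec knobs: a memory-medium emptyDir, a non-root UID, and a file created with mode 0777 inside the mount. A sketch with an assumed image and command in place of the suite's mount tester:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsPod runs as a non-root user, mounts a tmpfs-backed emptyDir, and
    // creates a file with mode 0777 so its permissions can be verified.
    func tmpfsPod() *corev1.Pod {
        uid := int64(1001)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox", // assumed stand-in
                    Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }

    func main() {}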
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:34:57.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-xfqs STEP: Creating a pod to test atomic-volume-subpath May 11 00:34:57.807: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xfqs" in namespace "subpath-577" to be "Succeeded or Failed" May 11 00:34:57.843: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Pending", Reason="", readiness=false. Elapsed: 36.502855ms May 11 00:34:59.864: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057208651s May 11 00:35:01.868: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 4.061398254s May 11 00:35:03.874: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 6.067300759s May 11 00:35:05.878: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 8.071707562s May 11 00:35:07.883: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 10.075963866s May 11 00:35:09.887: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 12.080593768s May 11 00:35:11.892: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 14.085409642s May 11 00:35:13.896: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 16.089186236s May 11 00:35:15.901: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 18.09415916s May 11 00:35:17.905: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 20.098489159s May 11 00:35:19.909: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 22.102670808s May 11 00:35:21.924: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Running", Reason="", readiness=true. Elapsed: 24.117400771s May 11 00:35:23.929: INFO: Pod "pod-subpath-test-downwardapi-xfqs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.121822389s STEP: Saw pod success May 11 00:35:23.929: INFO: Pod "pod-subpath-test-downwardapi-xfqs" satisfied condition "Succeeded or Failed" May 11 00:35:23.932: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-xfqs container test-container-subpath-downwardapi-xfqs: STEP: delete the pod May 11 00:35:23.981: INFO: Waiting for pod pod-subpath-test-downwardapi-xfqs to disappear May 11 00:35:23.992: INFO: Pod pod-subpath-test-downwardapi-xfqs no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-xfqs May 11 00:35:23.992: INFO: Deleting pod "pod-subpath-test-downwardapi-xfqs" in namespace "subpath-577" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:35:23.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-577" for this suite. • [SLOW TEST:26.470 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":148,"skipped":2488,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:35:24.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-562 STEP: creating service affinity-nodeport-transition in namespace services-562 STEP: creating replication controller affinity-nodeport-transition in namespace services-562 I0511 00:35:24.128216 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-562, replica count: 3 I0511 00:35:27.178641 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:35:30.178894 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 00:35:30.188: INFO: Creating new exec pod May 11 00:35:35.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 
11 00:35:35.499: INFO: stderr: "I0511 00:35:35.411002 1831 log.go:172] (0xc00075fb80) (0xc000151ea0) Create stream\nI0511 00:35:35.411076 1831 log.go:172] (0xc00075fb80) (0xc000151ea0) Stream added, broadcasting: 1\nI0511 00:35:35.413852 1831 log.go:172] (0xc00075fb80) Reply frame received for 1\nI0511 00:35:35.413901 1831 log.go:172] (0xc00075fb80) (0xc000628140) Create stream\nI0511 00:35:35.413911 1831 log.go:172] (0xc00075fb80) (0xc000628140) Stream added, broadcasting: 3\nI0511 00:35:35.414859 1831 log.go:172] (0xc00075fb80) Reply frame received for 3\nI0511 00:35:35.414898 1831 log.go:172] (0xc00075fb80) (0xc000660460) Create stream\nI0511 00:35:35.414912 1831 log.go:172] (0xc00075fb80) (0xc000660460) Stream added, broadcasting: 5\nI0511 00:35:35.415955 1831 log.go:172] (0xc00075fb80) Reply frame received for 5\nI0511 00:35:35.491280 1831 log.go:172] (0xc00075fb80) Data frame received for 3\nI0511 00:35:35.491440 1831 log.go:172] (0xc000628140) (3) Data frame handling\nI0511 00:35:35.491492 1831 log.go:172] (0xc00075fb80) Data frame received for 5\nI0511 00:35:35.491523 1831 log.go:172] (0xc000660460) (5) Data frame handling\nI0511 00:35:35.491553 1831 log.go:172] (0xc000660460) (5) Data frame sent\nI0511 00:35:35.491571 1831 log.go:172] (0xc00075fb80) Data frame received for 5\nI0511 00:35:35.491587 1831 log.go:172] (0xc000660460) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0511 00:35:35.493711 1831 log.go:172] (0xc00075fb80) Data frame received for 1\nI0511 00:35:35.493762 1831 log.go:172] (0xc000151ea0) (1) Data frame handling\nI0511 00:35:35.493798 1831 log.go:172] (0xc000151ea0) (1) Data frame sent\nI0511 00:35:35.493850 1831 log.go:172] (0xc00075fb80) (0xc000151ea0) Stream removed, broadcasting: 1\nI0511 00:35:35.493876 1831 log.go:172] (0xc00075fb80) Go away received\nI0511 00:35:35.494350 1831 log.go:172] (0xc00075fb80) (0xc000151ea0) Stream removed, broadcasting: 1\nI0511 00:35:35.494380 1831 log.go:172] (0xc00075fb80) (0xc000628140) Stream removed, broadcasting: 3\nI0511 00:35:35.494393 1831 log.go:172] (0xc00075fb80) (0xc000660460) Stream removed, broadcasting: 5\n" May 11 00:35:35.499: INFO: stdout: "" May 11 00:35:35.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c nc -zv -t -w 2 10.104.156.168 80' May 11 00:35:35.716: INFO: stderr: "I0511 00:35:35.633900 1854 log.go:172] (0xc000b05130) (0xc0007ede00) Create stream\nI0511 00:35:35.633977 1854 log.go:172] (0xc000b05130) (0xc0007ede00) Stream added, broadcasting: 1\nI0511 00:35:35.639323 1854 log.go:172] (0xc000b05130) Reply frame received for 1\nI0511 00:35:35.639393 1854 log.go:172] (0xc000b05130) (0xc000718960) Create stream\nI0511 00:35:35.639410 1854 log.go:172] (0xc000b05130) (0xc000718960) Stream added, broadcasting: 3\nI0511 00:35:35.640446 1854 log.go:172] (0xc000b05130) Reply frame received for 3\nI0511 00:35:35.640484 1854 log.go:172] (0xc000b05130) (0xc000650460) Create stream\nI0511 00:35:35.640495 1854 log.go:172] (0xc000b05130) (0xc000650460) Stream added, broadcasting: 5\nI0511 00:35:35.641937 1854 log.go:172] (0xc000b05130) Reply frame received for 5\nI0511 00:35:35.710269 1854 log.go:172] (0xc000b05130) Data frame received for 3\nI0511 00:35:35.710298 1854 log.go:172] (0xc000718960) (3) Data frame handling\nI0511 00:35:35.710315 1854 log.go:172] (0xc000b05130) Data frame 
received for 5\nI0511 00:35:35.710320 1854 log.go:172] (0xc000650460) (5) Data frame handling\nI0511 00:35:35.710326 1854 log.go:172] (0xc000650460) (5) Data frame sent\nI0511 00:35:35.710332 1854 log.go:172] (0xc000b05130) Data frame received for 5\nI0511 00:35:35.710336 1854 log.go:172] (0xc000650460) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.156.168 80\nConnection to 10.104.156.168 80 port [tcp/http] succeeded!\nI0511 00:35:35.711634 1854 log.go:172] (0xc000b05130) Data frame received for 1\nI0511 00:35:35.711650 1854 log.go:172] (0xc0007ede00) (1) Data frame handling\nI0511 00:35:35.711659 1854 log.go:172] (0xc0007ede00) (1) Data frame sent\nI0511 00:35:35.711672 1854 log.go:172] (0xc000b05130) (0xc0007ede00) Stream removed, broadcasting: 1\nI0511 00:35:35.711752 1854 log.go:172] (0xc000b05130) Go away received\nI0511 00:35:35.711969 1854 log.go:172] (0xc000b05130) (0xc0007ede00) Stream removed, broadcasting: 1\nI0511 00:35:35.711987 1854 log.go:172] (0xc000b05130) (0xc000718960) Stream removed, broadcasting: 3\nI0511 00:35:35.711996 1854 log.go:172] (0xc000b05130) (0xc000650460) Stream removed, broadcasting: 5\n" May 11 00:35:35.716: INFO: stdout: "" May 11 00:35:35.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31098' May 11 00:35:35.956: INFO: stderr: "I0511 00:35:35.869247 1875 log.go:172] (0xc0009551e0) (0xc000a6a780) Create stream\nI0511 00:35:35.869763 1875 log.go:172] (0xc0009551e0) (0xc000a6a780) Stream added, broadcasting: 1\nI0511 00:35:35.873825 1875 log.go:172] (0xc0009551e0) Reply frame received for 1\nI0511 00:35:35.873868 1875 log.go:172] (0xc0009551e0) (0xc000714c80) Create stream\nI0511 00:35:35.873877 1875 log.go:172] (0xc0009551e0) (0xc000714c80) Stream added, broadcasting: 3\nI0511 00:35:35.874512 1875 log.go:172] (0xc0009551e0) Reply frame received for 3\nI0511 00:35:35.874532 1875 log.go:172] (0xc0009551e0) (0xc000534d20) Create stream\nI0511 00:35:35.874541 1875 log.go:172] (0xc0009551e0) (0xc000534d20) Stream added, broadcasting: 5\nI0511 00:35:35.875159 1875 log.go:172] (0xc0009551e0) Reply frame received for 5\nI0511 00:35:35.949578 1875 log.go:172] (0xc0009551e0) Data frame received for 3\nI0511 00:35:35.949601 1875 log.go:172] (0xc000714c80) (3) Data frame handling\nI0511 00:35:35.949632 1875 log.go:172] (0xc0009551e0) Data frame received for 5\nI0511 00:35:35.949663 1875 log.go:172] (0xc000534d20) (5) Data frame handling\nI0511 00:35:35.949683 1875 log.go:172] (0xc000534d20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31098\nConnection to 172.17.0.13 31098 port [tcp/31098] succeeded!\nI0511 00:35:35.949694 1875 log.go:172] (0xc0009551e0) Data frame received for 5\nI0511 00:35:35.949718 1875 log.go:172] (0xc000534d20) (5) Data frame handling\nI0511 00:35:35.950878 1875 log.go:172] (0xc0009551e0) Data frame received for 1\nI0511 00:35:35.950898 1875 log.go:172] (0xc000a6a780) (1) Data frame handling\nI0511 00:35:35.950928 1875 log.go:172] (0xc000a6a780) (1) Data frame sent\nI0511 00:35:35.950947 1875 log.go:172] (0xc0009551e0) (0xc000a6a780) Stream removed, broadcasting: 1\nI0511 00:35:35.950962 1875 log.go:172] (0xc0009551e0) Go away received\nI0511 00:35:35.951451 1875 log.go:172] (0xc0009551e0) (0xc000a6a780) Stream removed, broadcasting: 1\nI0511 00:35:35.951473 1875 log.go:172] (0xc0009551e0) (0xc000714c80) Stream removed, broadcasting: 3\nI0511 00:35:35.951485 1875 log.go:172] 
(0xc0009551e0) (0xc000534d20) Stream removed, broadcasting: 5\n" May 11 00:35:35.957: INFO: stdout: "" May 11 00:35:35.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31098' May 11 00:35:36.169: INFO: stderr: "I0511 00:35:36.089804 1896 log.go:172] (0xc00003a160) (0xc0004d8320) Create stream\nI0511 00:35:36.089861 1896 log.go:172] (0xc00003a160) (0xc0004d8320) Stream added, broadcasting: 1\nI0511 00:35:36.091737 1896 log.go:172] (0xc00003a160) Reply frame received for 1\nI0511 00:35:36.091794 1896 log.go:172] (0xc00003a160) (0xc0004ca280) Create stream\nI0511 00:35:36.091807 1896 log.go:172] (0xc00003a160) (0xc0004ca280) Stream added, broadcasting: 3\nI0511 00:35:36.092726 1896 log.go:172] (0xc00003a160) Reply frame received for 3\nI0511 00:35:36.092764 1896 log.go:172] (0xc00003a160) (0xc0004ace60) Create stream\nI0511 00:35:36.092775 1896 log.go:172] (0xc00003a160) (0xc0004ace60) Stream added, broadcasting: 5\nI0511 00:35:36.093903 1896 log.go:172] (0xc00003a160) Reply frame received for 5\nI0511 00:35:36.162966 1896 log.go:172] (0xc00003a160) Data frame received for 3\nI0511 00:35:36.162991 1896 log.go:172] (0xc0004ca280) (3) Data frame handling\nI0511 00:35:36.163055 1896 log.go:172] (0xc00003a160) Data frame received for 5\nI0511 00:35:36.163101 1896 log.go:172] (0xc0004ace60) (5) Data frame handling\nI0511 00:35:36.163136 1896 log.go:172] (0xc0004ace60) (5) Data frame sent\nI0511 00:35:36.163155 1896 log.go:172] (0xc00003a160) Data frame received for 5\nI0511 00:35:36.163170 1896 log.go:172] (0xc0004ace60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31098\nConnection to 172.17.0.12 31098 port [tcp/31098] succeeded!\nI0511 00:35:36.164417 1896 log.go:172] (0xc00003a160) Data frame received for 1\nI0511 00:35:36.164437 1896 log.go:172] (0xc0004d8320) (1) Data frame handling\nI0511 00:35:36.164448 1896 log.go:172] (0xc0004d8320) (1) Data frame sent\nI0511 00:35:36.164459 1896 log.go:172] (0xc00003a160) (0xc0004d8320) Stream removed, broadcasting: 1\nI0511 00:35:36.164493 1896 log.go:172] (0xc00003a160) Go away received\nI0511 00:35:36.164794 1896 log.go:172] (0xc00003a160) (0xc0004d8320) Stream removed, broadcasting: 1\nI0511 00:35:36.164810 1896 log.go:172] (0xc00003a160) (0xc0004ca280) Stream removed, broadcasting: 3\nI0511 00:35:36.164819 1896 log.go:172] (0xc00003a160) (0xc0004ace60) Stream removed, broadcasting: 5\n" May 11 00:35:36.170: INFO: stdout: "" May 11 00:35:36.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31098/ ; done' May 11 00:35:36.474: INFO: stderr: "I0511 00:35:36.308271 1918 log.go:172] (0xc0009def20) (0xc00043d4a0) Create stream\nI0511 00:35:36.308325 1918 log.go:172] (0xc0009def20) (0xc00043d4a0) Stream added, broadcasting: 1\nI0511 00:35:36.311341 1918 log.go:172] (0xc0009def20) Reply frame received for 1\nI0511 00:35:36.311369 1918 log.go:172] (0xc0009def20) (0xc0002780a0) Create stream\nI0511 00:35:36.311378 1918 log.go:172] (0xc0009def20) (0xc0002780a0) Stream added, broadcasting: 3\nI0511 00:35:36.312138 1918 log.go:172] (0xc0009def20) Reply frame received for 3\nI0511 00:35:36.312183 1918 log.go:172] (0xc0009def20) (0xc000502500) Create stream\nI0511 00:35:36.312203 1918 
log.go:172] (0xc0009def20) (0xc000502500) Stream added, broadcasting: 5\nI0511 00:35:36.312993 1918 log.go:172] (0xc0009def20) Reply frame received for 5\nI0511 00:35:36.381044 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.381076 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.381093 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.381317 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.381346 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.381372 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.388851 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.388873 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.388889 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.389420 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.389435 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.389446 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.389461 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.389470 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.389496 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.389512 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.389523 1918 log.go:172] (0xc000502500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.389547 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.394068 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.394101 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.394135 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.394782 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.394807 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.394823 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.394847 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.394861 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.394873 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/I0511 00:35:36.394894 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.394903 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.394916 1918 log.go:172] (0xc000502500) (5) Data frame sent\n\nI0511 00:35:36.398849 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.398870 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.398902 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.399254 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.399273 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.399284 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.399295 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.399301 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.399323 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.399340 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.399349 1918 log.go:172] 
(0xc000502500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.399379 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.405576 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.405618 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.405641 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.406071 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.406089 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.406109 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.406123 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.406146 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.406163 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.411076 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.411087 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.411093 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.411503 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.411529 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.411539 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.411552 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.411573 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.411586 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.417520 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.417539 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.417549 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.417984 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.418021 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.418040 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.418054 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.418074 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.418104 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.422282 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.422315 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.422353 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.422709 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.422722 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.422749 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.422781 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.422798 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.422821 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.428657 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.428687 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.428731 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.429258 1918 log.go:172] 
(0xc0009def20) Data frame received for 3\nI0511 00:35:36.429279 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.429287 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.429322 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.429345 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.429366 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.433382 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.433414 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.433445 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.433785 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.433798 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.433805 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.433833 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.433916 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.433956 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.438452 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.438485 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.438510 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.439274 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.439296 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.439324 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.439341 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.439353 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.439367 1918 log.go:172] (0xc000502500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.439389 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.439406 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.439428 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.444322 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.444352 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.444373 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.444781 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.444831 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.444859 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.444899 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.444919 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.444931 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.448723 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.448755 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.448800 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.449299 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.449320 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.449332 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.449351 1918 
log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.449370 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.449401 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.453652 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.453668 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.453677 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.454190 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.454221 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.454235 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.454259 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.454270 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.454280 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.454306 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.454317 1918 log.go:172] (0xc000502500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.454338 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.457933 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.457955 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.457997 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.458368 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.458406 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.458422 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.458443 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.458467 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.458495 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.458509 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.458518 1918 log.go:172] (0xc000502500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.458544 1918 log.go:172] (0xc000502500) (5) Data frame sent\nI0511 00:35:36.462144 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.462175 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.462208 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.462704 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.462738 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.462754 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.462769 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 00:35:36.462779 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.462790 1918 log.go:172] (0xc000502500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.466320 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.466369 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.466408 1918 log.go:172] (0xc0002780a0) (3) Data frame sent\nI0511 00:35:36.466757 1918 log.go:172] (0xc0009def20) Data frame received for 3\nI0511 00:35:36.466779 1918 log.go:172] (0xc0002780a0) (3) Data frame handling\nI0511 00:35:36.466922 1918 log.go:172] (0xc0009def20) Data frame received for 5\nI0511 
00:35:36.466939 1918 log.go:172] (0xc000502500) (5) Data frame handling\nI0511 00:35:36.468403 1918 log.go:172] (0xc0009def20) Data frame received for 1\nI0511 00:35:36.468438 1918 log.go:172] (0xc00043d4a0) (1) Data frame handling\nI0511 00:35:36.468459 1918 log.go:172] (0xc00043d4a0) (1) Data frame sent\nI0511 00:35:36.468478 1918 log.go:172] (0xc0009def20) (0xc00043d4a0) Stream removed, broadcasting: 1\nI0511 00:35:36.468497 1918 log.go:172] (0xc0009def20) Go away received\nI0511 00:35:36.468957 1918 log.go:172] (0xc0009def20) (0xc00043d4a0) Stream removed, broadcasting: 1\nI0511 00:35:36.468980 1918 log.go:172] (0xc0009def20) (0xc0002780a0) Stream removed, broadcasting: 3\nI0511 00:35:36.468992 1918 log.go:172] (0xc0009def20) (0xc000502500) Stream removed, broadcasting: 5\n" May 11 00:35:36.474: INFO: stdout: "\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-68nnr\naffinity-nodeport-transition-hwtpn\naffinity-nodeport-transition-68nnr\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-68nnr\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-hwtpn\naffinity-nodeport-transition-68nnr\naffinity-nodeport-transition-68nnr\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-hwtpn\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-hwtpn\naffinity-nodeport-transition-68nnr" May 11 00:35:36.474: INFO: Received response from host: May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-hwtpn May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-hwtpn May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-hwtpn May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-hwtpn May 11 00:35:36.474: INFO: Received response from host: affinity-nodeport-transition-68nnr May 11 00:35:36.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-562 execpod-affinitydscrp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31098/ ; done' May 11 00:35:36.783: INFO: stderr: "I0511 00:35:36.644112 1936 log.go:172] (0xc0003a0d10) (0xc000b24320) Create stream\nI0511 00:35:36.644194 1936 log.go:172] (0xc0003a0d10) (0xc000b24320) Stream added, broadcasting: 1\nI0511 00:35:36.649721 1936 log.go:172] (0xc0003a0d10) Reply frame received for 1\nI0511 00:35:36.649760 1936 log.go:172] 
(0xc0003a0d10) (0xc000814dc0) Create stream\nI0511 00:35:36.649769 1936 log.go:172] (0xc0003a0d10) (0xc000814dc0) Stream added, broadcasting: 3\nI0511 00:35:36.650760 1936 log.go:172] (0xc0003a0d10) Reply frame received for 3\nI0511 00:35:36.650792 1936 log.go:172] (0xc0003a0d10) (0xc0007fabe0) Create stream\nI0511 00:35:36.650805 1936 log.go:172] (0xc0003a0d10) (0xc0007fabe0) Stream added, broadcasting: 5\nI0511 00:35:36.651654 1936 log.go:172] (0xc0003a0d10) Reply frame received for 5\nI0511 00:35:36.695309 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.695338 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.695352 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\nI0511 00:35:36.695384 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.695395 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.695406 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.699584 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.699615 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.699633 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.700733 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.700793 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.700836 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.700880 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.700907 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.700925 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.707478 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.707516 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.707542 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.708106 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.708146 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.708161 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.708191 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.708216 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.708239 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.713059 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.713081 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.713303 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.713925 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.713951 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.713964 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.713977 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.713998 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.714022 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.718996 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.719014 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.719037 1936 
log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.719443 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.719476 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.719490 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.719513 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.719537 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.719555 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.723889 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.723933 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.723980 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.724300 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.724332 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.724345 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.724364 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.724374 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.724385 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.727866 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.727904 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.727946 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.728292 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.728318 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.728356 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.728378 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.728397 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.728414 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.732191 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.732220 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.732242 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.732531 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.732564 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.732575 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.732590 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.732599 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.732610 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.736241 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.736263 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.736282 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.736583 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.736611 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.736621 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.736637 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.736649 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 
00:35:36.736678 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.740007 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.740030 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.740049 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.740478 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.740493 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.740503 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.740516 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.740550 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.740581 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.744615 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.744636 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.744655 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.745048 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.745090 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.745332 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.745359 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.745372 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.745384 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.749026 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.749058 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.749105 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.750060 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.750082 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.750095 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.750112 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.750123 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.750134 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.754345 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.754367 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.754398 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.754921 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.754949 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.754965 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.754994 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.755013 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.755043 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.758905 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.758930 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.758948 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.759374 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 
00:35:36.759403 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.759426 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.759479 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.759499 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.759523 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\nI0511 00:35:36.763651 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.763680 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.763711 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.763996 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.764033 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.764081 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.764207 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.764231 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.764250 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.769380 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.769412 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.769437 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.769802 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.769823 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.769847 1936 log.go:172] (0xc0007fabe0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31098/\nI0511 00:35:36.769862 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.769887 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.769913 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.773938 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.773976 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.774006 1936 log.go:172] (0xc000814dc0) (3) Data frame sent\nI0511 00:35:36.774851 1936 log.go:172] (0xc0003a0d10) Data frame received for 3\nI0511 00:35:36.774875 1936 log.go:172] (0xc000814dc0) (3) Data frame handling\nI0511 00:35:36.775372 1936 log.go:172] (0xc0003a0d10) Data frame received for 5\nI0511 00:35:36.775410 1936 log.go:172] (0xc0007fabe0) (5) Data frame handling\nI0511 00:35:36.776810 1936 log.go:172] (0xc0003a0d10) Data frame received for 1\nI0511 00:35:36.776837 1936 log.go:172] (0xc000b24320) (1) Data frame handling\nI0511 00:35:36.776863 1936 log.go:172] (0xc000b24320) (1) Data frame sent\nI0511 00:35:36.776890 1936 log.go:172] (0xc0003a0d10) (0xc000b24320) Stream removed, broadcasting: 1\nI0511 00:35:36.776909 1936 log.go:172] (0xc0003a0d10) Go away received\nI0511 00:35:36.777658 1936 log.go:172] (0xc0003a0d10) (0xc000b24320) Stream removed, broadcasting: 1\nI0511 00:35:36.777686 1936 log.go:172] (0xc0003a0d10) (0xc000814dc0) Stream removed, broadcasting: 3\nI0511 00:35:36.777698 1936 log.go:172] (0xc0003a0d10) (0xc0007fabe0) Stream removed, broadcasting: 5\n" May 11 00:35:36.783: INFO: stdout: 
"\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb\naffinity-nodeport-transition-6r7kb" May 11 00:35:36.783: INFO: Received response from host: May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.783: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.784: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.784: INFO: Received response from host: affinity-nodeport-transition-6r7kb May 11 00:35:36.784: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-562, will wait for the garbage collector to delete the pods May 11 00:35:36.923: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.885669ms May 11 00:35:37.223: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 300.254044ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:35:54.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-562" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:31.008 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":149,"skipped":2491,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:35:55.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-cc5c138d-54a9-4290-a5dc-374e3d0f7cbd STEP: Creating a pod to test consume configMaps May 11 00:35:55.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03" in namespace "configmap-1615" to be "Succeeded or Failed" May 11 00:35:55.159: INFO: Pod "pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03": Phase="Pending", Reason="", readiness=false. Elapsed: 14.077678ms May 11 00:35:57.162: INFO: Pod "pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01741329s May 11 00:35:59.167: INFO: Pod "pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021676664s STEP: Saw pod success May 11 00:35:59.167: INFO: Pod "pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03" satisfied condition "Succeeded or Failed" May 11 00:35:59.170: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03 container configmap-volume-test: STEP: delete the pod May 11 00:35:59.226: INFO: Waiting for pod pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03 to disappear May 11 00:35:59.232: INFO: Pod pod-configmaps-09a9169e-f73c-41f0-86ad-15c56b309d03 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:35:59.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1615" for this suite. 
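Note: the ConfigMap volume test above boils down to mounting a ConfigMap as files and reading one key back from the pod's log. A minimal sketch under assumed names (demo-config and configmap-volume-demo are illustrative; the test uses generated identifiers):

# Create a ConfigMap and a pod that mounts it as a volume.
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/data-1"]   # each ConfigMap key appears as a file
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF

# Once the pod reaches Succeeded (the "Succeeded or Failed" condition polled
# above), its log should read "value-1".
kubectl logs configmap-volume-demo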
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2502,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:35:59.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:36:05.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4635" for this suite. • [SLOW TEST:6.135 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:36:05.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:36:05.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf" in namespace "projected-5927" to be "Succeeded or Failed" May 11 00:36:05.538: INFO: Pod "downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.724576ms May 11 00:36:07.703: INFO: Pod "downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180674154s May 11 00:36:09.708: INFO: Pod "downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185070721s STEP: Saw pod success May 11 00:36:09.708: INFO: Pod "downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf" satisfied condition "Succeeded or Failed" May 11 00:36:09.711: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf container client-container: STEP: delete the pod May 11 00:36:09.844: INFO: Waiting for pod downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf to disappear May 11 00:36:09.891: INFO: Pod downwardapi-volume-28287ab2-60e2-4b30-8a2f-f25649e78fbf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:36:09.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5927" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2538,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:36:10.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 00:36:10.087: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:36:17.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4617" for this suite. 
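Note: the init-container test above needs nothing more than a pod spec with ordered initContainers. A minimal sketch with a hypothetical name (init-demo; the conformance test uses generated names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo main"]
EOF

# init-1 and init-2 must each run to completion, in order, before "main"
# starts; on a RestartNever pod a failed init container fails the pod for good.
kubectl get pod init-demo --watch   # Init:0/2 -> Init:1/2 -> PodInitializing -> Completed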
• [SLOW TEST:7.541 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":153,"skipped":2559,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:36:17.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:36:17.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 11 00:36:18.278: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:18Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:18Z]] name:name1 resourceVersion:3219944 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:63965e10-bb8c-4ddb-859b-338c7e9e6970] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 11 00:36:28.315: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:28Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:28Z]] name:name2 resourceVersion:3219990 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6d0331a1-d5c3-4b4f-abc1-d99df1bbf6d1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 11 00:36:38.324: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:38Z]] name:name1 resourceVersion:3220019 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:63965e10-bb8c-4ddb-859b-338c7e9e6970] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 11 00:36:48.331: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] 
dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:48Z]] name:name2 resourceVersion:3220054 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6d0331a1-d5c3-4b4f-abc1-d99df1bbf6d1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 11 00:36:58.338: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:38Z]] name:name1 resourceVersion:3220084 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:63965e10-bb8c-4ddb-859b-338c7e9e6970] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 11 00:37:08.344: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T00:36:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T00:36:48Z]] name:name2 resourceVersion:3220114 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6d0331a1-d5c3-4b4f-abc1-d99df1bbf6d1] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:37:18.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3587" for this suite. 
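------------------------------
The event sequence above is the substance of the spec: one ADDED per create, MODIFIED on each update (note `generation` bumping to 2 and the `dummy:test` field appearing), then DELETED, all in order. For reference, a minimal Go sketch of driving the same watch with client-go's dynamic client — the GVR is taken from the events above (mygroup.example.com/v1beta1, plural "noxus") and the kubeconfig path matches this run; this is an illustration, not the e2e framework's own code:

// Sketch: watch custom resource objects and print one line per event,
// mirroring the "Got : <TYPE>" entries recorded above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GVR from the log: mygroup.example.com/v1beta1, resource plural "noxus"
	// (the objects are cluster-scoped, per their selfLink).
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// One event per "Got :" line: ADDED, ADDED, MODIFIED, MODIFIED, DELETED, DELETED.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------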
• [SLOW TEST:61.318 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":154,"skipped":2565,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:37:18.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-vr5m STEP: Creating a pod to test atomic-volume-subpath May 11 00:37:18.996: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vr5m" in namespace "subpath-3941" to be "Succeeded or Failed" May 11 00:37:19.014: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 17.229432ms May 11 00:37:21.018: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021699448s May 11 00:37:23.023: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 4.026334289s May 11 00:37:25.027: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 6.030230864s May 11 00:37:27.031: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 8.034805228s May 11 00:37:29.036: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.039216921s May 11 00:37:31.040: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 12.043600346s May 11 00:37:33.044: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 14.04790375s May 11 00:37:35.048: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 16.052033363s May 11 00:37:37.053: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 18.056777371s May 11 00:37:39.057: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. Elapsed: 20.061033782s May 11 00:37:41.062: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.065638197s May 11 00:37:43.067: INFO: Pod "pod-subpath-test-configmap-vr5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070466645s STEP: Saw pod success May 11 00:37:43.067: INFO: Pod "pod-subpath-test-configmap-vr5m" satisfied condition "Succeeded or Failed" May 11 00:37:43.070: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-vr5m container test-container-subpath-configmap-vr5m: STEP: delete the pod May 11 00:37:43.155: INFO: Waiting for pod pod-subpath-test-configmap-vr5m to disappear May 11 00:37:43.163: INFO: Pod pod-subpath-test-configmap-vr5m no longer exists STEP: Deleting pod pod-subpath-test-configmap-vr5m May 11 00:37:43.163: INFO: Deleting pod "pod-subpath-test-configmap-vr5m" in namespace "subpath-3941" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:37:43.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3941" for this suite. • [SLOW TEST:24.310 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":155,"skipped":2575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:37:43.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:37:43.705: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:37:45.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754263, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754263, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754263, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754263, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:37:48.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:38:01.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-413" for this suite. STEP: Destroying namespace "webhook-413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.016 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":156,"skipped":2611,"failed":0} [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:38:01.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-f5773dc7-5d5d-4ade-98d0-7f68b3bec92a STEP: Creating secret with name s-test-opt-upd-e257aed2-4c6e-4644-9b35-5a0e441e87cc STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f5773dc7-5d5d-4ade-98d0-7f68b3bec92a STEP: Updating secret s-test-opt-upd-e257aed2-4c6e-4644-9b35-5a0e441e87cc STEP: Creating secret with name s-test-opt-create-07856247-c1f6-4073-8acc-8dd8504e97e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:38:11.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7490" for this suite. • [SLOW TEST:10.314 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2611,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:38:11.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2de62d00-1a5b-4d56-9900-0329a22fb323 STEP: Creating a pod to test consume configMaps May 11 00:38:11.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a" in namespace "configmap-4276" to be "Succeeded or Failed" May 11 00:38:11.623: INFO: Pod "pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.726722ms May 11 00:38:13.734: INFO: Pod "pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137034088s May 11 00:38:15.743: INFO: Pod "pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145376202s STEP: Saw pod success May 11 00:38:15.743: INFO: Pod "pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a" satisfied condition "Succeeded or Failed" May 11 00:38:15.745: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a container configmap-volume-test: STEP: delete the pod May 11 00:38:15.808: INFO: Waiting for pod pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a to disappear May 11 00:38:15.845: INFO: Pod pod-configmaps-96fea231-8e4b-4c1e-8cee-359a18b19f2a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:38:15.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4276" for this suite. 
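------------------------------
Concretely, "mappings and Item mode set" means the ConfigMap is mounted as a volume, one key is remapped to a chosen relative path, and that file gets an explicit mode. A sketch of the pod shape such a test builds, assuming illustrative names, keys, and paths rather than the test's generated ones:

// Sketch: consume a ConfigMap as a volume with a key->path mapping and a
// per-item file mode. Names and the "default" namespace are assumptions.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // "Item mode set": explicit permissions on the mapped file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Map one key to a relative path inside the mount and set its mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------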
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2631,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:38:15.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 00:38:16.114: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:38:21.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4086" for this suite. • [SLOW TEST:5.977 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":159,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:38:21.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-bb54fa61-4291-4f82-9d14-e95eed4898ce in namespace container-probe-4234 May 11 00:38:26.216: INFO: Started pod busybox-bb54fa61-4291-4f82-9d14-e95eed4898ce in namespace container-probe-4234 STEP: 
checking the pod's current state and verifying that restartCount is present May 11 00:38:26.218: INFO: Initial restart count of pod busybox-bb54fa61-4291-4f82-9d14-e95eed4898ce is 0 May 11 00:39:22.441: INFO: Restart count of pod container-probe-4234/busybox-bb54fa61-4291-4f82-9d14-e95eed4898ce is now 1 (56.223080272s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:39:22.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4234" for this suite. • [SLOW TEST:60.674 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:39:22.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:39:35.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4535" for this suite. • [SLOW TEST:13.324 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":161,"skipped":2762,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:39:35.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-b9f03b2c-3746-4729-b564-e9a5e540a46c in namespace container-probe-8171 May 11 00:39:39.965: INFO: Started pod test-webserver-b9f03b2c-3746-4729-b564-e9a5e540a46c in namespace container-probe-8171 STEP: checking the pod's current state and verifying that restartCount is present May 11 00:39:39.967: INFO: Initial restart count of pod test-webserver-b9f03b2c-3746-4729-b564-e9a5e540a46c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:43:40.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8171" for this suite. • [SLOW TEST:244.996 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:43:40.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:43:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6960" for this suite. • [SLOW TEST:16.768 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":163,"skipped":2805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:43:57.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8238 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8238 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8238 May 11 00:43:57.718: INFO: Found 0 stateful pods, waiting for 1 May 11 00:44:07.722: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 11 00:44:07.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 00:44:10.592: INFO: stderr: "I0511 00:44:10.460189 1956 log.go:172] (0xc00081d080) (0xc000853720) Create stream\nI0511 00:44:10.460249 1956 log.go:172] (0xc00081d080) (0xc000853720) Stream added, broadcasting: 1\nI0511 00:44:10.464138 1956 log.go:172] (0xc00081d080) Reply frame received for 1\nI0511 00:44:10.464183 1956 log.go:172] (0xc00081d080) (0xc000842f00) Create stream\nI0511 00:44:10.464192 1956 log.go:172] (0xc00081d080) (0xc000842f00) Stream added, broadcasting: 3\nI0511 00:44:10.465366 1956 log.go:172] (0xc00081d080) Reply frame received for 3\nI0511 00:44:10.465389 1956 log.go:172] (0xc00081d080) (0xc0008537c0) Create stream\nI0511 00:44:10.465398 1956 log.go:172] (0xc00081d080) (0xc0008537c0) Stream added, broadcasting: 5\nI0511 00:44:10.466460 1956 log.go:172] (0xc00081d080) Reply frame received for 5\nI0511 00:44:10.554482 1956 log.go:172] (0xc00081d080) Data frame received for 5\nI0511 00:44:10.554515 1956 log.go:172] (0xc0008537c0) (5) Data frame handling\nI0511 00:44:10.554537 1956 log.go:172] (0xc0008537c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 00:44:10.584057 1956 log.go:172] (0xc00081d080) Data frame received for 3\nI0511 00:44:10.584085 1956 log.go:172] (0xc000842f00) (3) Data frame handling\nI0511 00:44:10.584098 1956 log.go:172] (0xc000842f00) (3) Data frame sent\nI0511 00:44:10.584140 1956 log.go:172] (0xc00081d080) Data frame received for 5\nI0511 00:44:10.584178 1956 log.go:172] (0xc0008537c0) (5) Data frame handling\nI0511 00:44:10.584307 1956 log.go:172] (0xc00081d080) Data frame received for 3\nI0511 00:44:10.584324 1956 log.go:172] (0xc000842f00) (3) Data frame handling\nI0511 00:44:10.586391 1956 log.go:172] (0xc00081d080) Data frame received for 1\nI0511 00:44:10.586427 1956 log.go:172] (0xc000853720) (1) Data frame handling\nI0511 00:44:10.586469 1956 log.go:172] (0xc000853720) (1) Data frame sent\nI0511 00:44:10.586495 1956 log.go:172] (0xc00081d080) (0xc000853720) Stream removed, broadcasting: 1\nI0511 00:44:10.586538 1956 log.go:172] (0xc00081d080) Go away received\nI0511 00:44:10.586805 1956 log.go:172] (0xc00081d080) (0xc000853720) Stream removed, broadcasting: 1\nI0511 00:44:10.586821 1956 log.go:172] (0xc00081d080) (0xc000842f00) Stream removed, broadcasting: 3\nI0511 00:44:10.586828 1956 log.go:172] (0xc00081d080) (0xc0008537c0) Stream removed, broadcasting: 5\n" May 11 00:44:10.592: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 00:44:10.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 00:44:10.596: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 00:44:20.599: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 00:44:20.599: INFO: Waiting for statefulset status.replicas updated to 0 May 11 00:44:20.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999703s May 11 00:44:21.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980443333s May 11 00:44:22.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.95941s May 11 00:44:23.658: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.953669675s May 11 00:44:24.683: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 5.948969458s May 11 00:44:25.688: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.923577357s May 11 00:44:26.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.918680577s May 11 00:44:27.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.913630865s May 11 00:44:28.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.908364014s May 11 00:44:29.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 904.143016ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8238 May 11 00:44:30.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 00:44:30.972: INFO: stderr: "I0511 00:44:30.874907 1987 log.go:172] (0xc000ab16b0) (0xc0009f6500) Create stream\nI0511 00:44:30.874988 1987 log.go:172] (0xc000ab16b0) (0xc0009f6500) Stream added, broadcasting: 1\nI0511 00:44:30.878978 1987 log.go:172] (0xc000ab16b0) Reply frame received for 1\nI0511 00:44:30.879043 1987 log.go:172] (0xc000ab16b0) (0xc000516140) Create stream\nI0511 00:44:30.879061 1987 log.go:172] (0xc000ab16b0) (0xc000516140) Stream added, broadcasting: 3\nI0511 00:44:30.879973 1987 log.go:172] (0xc000ab16b0) Reply frame received for 3\nI0511 00:44:30.880001 1987 log.go:172] (0xc000ab16b0) (0xc000474c80) Create stream\nI0511 00:44:30.880009 1987 log.go:172] (0xc000ab16b0) (0xc000474c80) Stream added, broadcasting: 5\nI0511 00:44:30.880761 1987 log.go:172] (0xc000ab16b0) Reply frame received for 5\nI0511 00:44:30.950908 1987 log.go:172] (0xc000ab16b0) Data frame received for 5\nI0511 00:44:30.951057 1987 log.go:172] (0xc000474c80) (5) Data frame handling\nI0511 00:44:30.951092 1987 log.go:172] (0xc000474c80) (5) Data frame sent\nI0511 00:44:30.951116 1987 log.go:172] (0xc000ab16b0) Data frame received for 5\nI0511 00:44:30.951136 1987 log.go:172] (0xc000474c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 00:44:30.951159 1987 log.go:172] (0xc000ab16b0) Data frame received for 3\nI0511 00:44:30.951176 1987 log.go:172] (0xc000516140) (3) Data frame handling\nI0511 00:44:30.951204 1987 log.go:172] (0xc000516140) (3) Data frame sent\nI0511 00:44:30.951225 1987 log.go:172] (0xc000ab16b0) Data frame received for 3\nI0511 00:44:30.951249 1987 log.go:172] (0xc000516140) (3) Data frame handling\nI0511 00:44:30.966606 1987 log.go:172] (0xc000ab16b0) Data frame received for 1\nI0511 00:44:30.966630 1987 log.go:172] (0xc0009f6500) (1) Data frame handling\nI0511 00:44:30.966648 1987 log.go:172] (0xc0009f6500) (1) Data frame sent\nI0511 00:44:30.966660 1987 log.go:172] (0xc000ab16b0) (0xc0009f6500) Stream removed, broadcasting: 1\nI0511 00:44:30.966981 1987 log.go:172] (0xc000ab16b0) (0xc0009f6500) Stream removed, broadcasting: 1\nI0511 00:44:30.966997 1987 log.go:172] (0xc000ab16b0) (0xc000516140) Stream removed, broadcasting: 3\nI0511 00:44:30.967232 1987 log.go:172] (0xc000ab16b0) (0xc000474c80) Stream removed, broadcasting: 5\nI0511 00:44:30.967481 1987 log.go:172] (0xc000ab16b0) Go away received\n" May 11 00:44:30.972: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 00:44:30.972: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 
00:44:30.976: INFO: Found 1 stateful pods, waiting for 3 May 11 00:44:40.982: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 00:44:40.982: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 00:44:40.982: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 11 00:44:40.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 00:44:41.225: INFO: stderr: "I0511 00:44:41.139102 2008 log.go:172] (0xc0000ec370) (0xc000548460) Create stream\nI0511 00:44:41.139156 2008 log.go:172] (0xc0000ec370) (0xc000548460) Stream added, broadcasting: 1\nI0511 00:44:41.140547 2008 log.go:172] (0xc0000ec370) Reply frame received for 1\nI0511 00:44:41.140586 2008 log.go:172] (0xc0000ec370) (0xc0004dcfa0) Create stream\nI0511 00:44:41.140595 2008 log.go:172] (0xc0000ec370) (0xc0004dcfa0) Stream added, broadcasting: 3\nI0511 00:44:41.141621 2008 log.go:172] (0xc0000ec370) Reply frame received for 3\nI0511 00:44:41.141655 2008 log.go:172] (0xc0000ec370) (0xc00043aa00) Create stream\nI0511 00:44:41.141665 2008 log.go:172] (0xc0000ec370) (0xc00043aa00) Stream added, broadcasting: 5\nI0511 00:44:41.142542 2008 log.go:172] (0xc0000ec370) Reply frame received for 5\nI0511 00:44:41.219120 2008 log.go:172] (0xc0000ec370) Data frame received for 5\nI0511 00:44:41.219155 2008 log.go:172] (0xc00043aa00) (5) Data frame handling\nI0511 00:44:41.219164 2008 log.go:172] (0xc00043aa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 00:44:41.219170 2008 log.go:172] (0xc0000ec370) Data frame received for 5\nI0511 00:44:41.219191 2008 log.go:172] (0xc00043aa00) (5) Data frame handling\nI0511 00:44:41.219205 2008 log.go:172] (0xc0000ec370) Data frame received for 3\nI0511 00:44:41.219210 2008 log.go:172] (0xc0004dcfa0) (3) Data frame handling\nI0511 00:44:41.219216 2008 log.go:172] (0xc0004dcfa0) (3) Data frame sent\nI0511 00:44:41.219221 2008 log.go:172] (0xc0000ec370) Data frame received for 3\nI0511 00:44:41.219225 2008 log.go:172] (0xc0004dcfa0) (3) Data frame handling\nI0511 00:44:41.220331 2008 log.go:172] (0xc0000ec370) Data frame received for 1\nI0511 00:44:41.220360 2008 log.go:172] (0xc000548460) (1) Data frame handling\nI0511 00:44:41.220385 2008 log.go:172] (0xc000548460) (1) Data frame sent\nI0511 00:44:41.220405 2008 log.go:172] (0xc0000ec370) (0xc000548460) Stream removed, broadcasting: 1\nI0511 00:44:41.220425 2008 log.go:172] (0xc0000ec370) Go away received\nI0511 00:44:41.220770 2008 log.go:172] (0xc0000ec370) (0xc000548460) Stream removed, broadcasting: 1\nI0511 00:44:41.220785 2008 log.go:172] (0xc0000ec370) (0xc0004dcfa0) Stream removed, broadcasting: 3\nI0511 00:44:41.220800 2008 log.go:172] (0xc0000ec370) (0xc00043aa00) Stream removed, broadcasting: 5\n" May 11 00:44:41.225: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 00:44:41.225: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 00:44:41.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8238 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 00:44:41.490: INFO: stderr: "I0511 00:44:41.359444 2028 log.go:172] (0xc0006b6160) (0xc0006d4500) Create stream\nI0511 00:44:41.359505 2028 log.go:172] (0xc0006b6160) (0xc0006d4500) Stream added, broadcasting: 1\nI0511 00:44:41.362342 2028 log.go:172] (0xc0006b6160) Reply frame received for 1\nI0511 00:44:41.362392 2028 log.go:172] (0xc0006b6160) (0xc0006d4e60) Create stream\nI0511 00:44:41.362417 2028 log.go:172] (0xc0006b6160) (0xc0006d4e60) Stream added, broadcasting: 3\nI0511 00:44:41.363375 2028 log.go:172] (0xc0006b6160) Reply frame received for 3\nI0511 00:44:41.363430 2028 log.go:172] (0xc0006b6160) (0xc000554140) Create stream\nI0511 00:44:41.363454 2028 log.go:172] (0xc0006b6160) (0xc000554140) Stream added, broadcasting: 5\nI0511 00:44:41.364436 2028 log.go:172] (0xc0006b6160) Reply frame received for 5\nI0511 00:44:41.433711 2028 log.go:172] (0xc0006b6160) Data frame received for 5\nI0511 00:44:41.433755 2028 log.go:172] (0xc000554140) (5) Data frame handling\nI0511 00:44:41.433789 2028 log.go:172] (0xc000554140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 00:44:41.482378 2028 log.go:172] (0xc0006b6160) Data frame received for 3\nI0511 00:44:41.482414 2028 log.go:172] (0xc0006d4e60) (3) Data frame handling\nI0511 00:44:41.482436 2028 log.go:172] (0xc0006d4e60) (3) Data frame sent\nI0511 00:44:41.482551 2028 log.go:172] (0xc0006b6160) Data frame received for 3\nI0511 00:44:41.482585 2028 log.go:172] (0xc0006d4e60) (3) Data frame handling\nI0511 00:44:41.482609 2028 log.go:172] (0xc0006b6160) Data frame received for 5\nI0511 00:44:41.482632 2028 log.go:172] (0xc000554140) (5) Data frame handling\nI0511 00:44:41.484774 2028 log.go:172] (0xc0006b6160) Data frame received for 1\nI0511 00:44:41.484795 2028 log.go:172] (0xc0006d4500) (1) Data frame handling\nI0511 00:44:41.484808 2028 log.go:172] (0xc0006d4500) (1) Data frame sent\nI0511 00:44:41.484838 2028 log.go:172] (0xc0006b6160) (0xc0006d4500) Stream removed, broadcasting: 1\nI0511 00:44:41.485277 2028 log.go:172] (0xc0006b6160) (0xc0006d4500) Stream removed, broadcasting: 1\nI0511 00:44:41.485289 2028 log.go:172] (0xc0006b6160) (0xc0006d4e60) Stream removed, broadcasting: 3\nI0511 00:44:41.485388 2028 log.go:172] (0xc0006b6160) (0xc000554140) Stream removed, broadcasting: 5\n" May 11 00:44:41.490: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 00:44:41.490: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 00:44:41.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 00:44:41.779: INFO: stderr: "I0511 00:44:41.622602 2048 log.go:172] (0xc000a416b0) (0xc000832500) Create stream\nI0511 00:44:41.622650 2048 log.go:172] (0xc000a416b0) (0xc000832500) Stream added, broadcasting: 1\nI0511 00:44:41.627542 2048 log.go:172] (0xc000a416b0) Reply frame received for 1\nI0511 00:44:41.627573 2048 log.go:172] (0xc000a416b0) (0xc00082d400) Create stream\nI0511 00:44:41.627588 2048 log.go:172] (0xc000a416b0) (0xc00082d400) Stream added, broadcasting: 3\nI0511 00:44:41.628571 2048 log.go:172] (0xc000a416b0) Reply frame received for 3\nI0511 00:44:41.628620 2048 log.go:172] (0xc000a416b0) 
(0xc0005a8140) Create stream\nI0511 00:44:41.628633 2048 log.go:172] (0xc000a416b0) (0xc0005a8140) Stream added, broadcasting: 5\nI0511 00:44:41.629995 2048 log.go:172] (0xc000a416b0) Reply frame received for 5\nI0511 00:44:41.698997 2048 log.go:172] (0xc000a416b0) Data frame received for 5\nI0511 00:44:41.699027 2048 log.go:172] (0xc0005a8140) (5) Data frame handling\nI0511 00:44:41.699050 2048 log.go:172] (0xc0005a8140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 00:44:41.770674 2048 log.go:172] (0xc000a416b0) Data frame received for 3\nI0511 00:44:41.770707 2048 log.go:172] (0xc00082d400) (3) Data frame handling\nI0511 00:44:41.770728 2048 log.go:172] (0xc00082d400) (3) Data frame sent\nI0511 00:44:41.771236 2048 log.go:172] (0xc000a416b0) Data frame received for 5\nI0511 00:44:41.771260 2048 log.go:172] (0xc0005a8140) (5) Data frame handling\nI0511 00:44:41.771283 2048 log.go:172] (0xc000a416b0) Data frame received for 3\nI0511 00:44:41.771297 2048 log.go:172] (0xc00082d400) (3) Data frame handling\nI0511 00:44:41.773939 2048 log.go:172] (0xc000a416b0) Data frame received for 1\nI0511 00:44:41.773967 2048 log.go:172] (0xc000832500) (1) Data frame handling\nI0511 00:44:41.773988 2048 log.go:172] (0xc000832500) (1) Data frame sent\nI0511 00:44:41.774018 2048 log.go:172] (0xc000a416b0) (0xc000832500) Stream removed, broadcasting: 1\nI0511 00:44:41.774038 2048 log.go:172] (0xc000a416b0) Go away received\nI0511 00:44:41.774333 2048 log.go:172] (0xc000a416b0) (0xc000832500) Stream removed, broadcasting: 1\nI0511 00:44:41.774352 2048 log.go:172] (0xc000a416b0) (0xc00082d400) Stream removed, broadcasting: 3\nI0511 00:44:41.774364 2048 log.go:172] (0xc000a416b0) (0xc0005a8140) Stream removed, broadcasting: 5\n" May 11 00:44:41.779: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 00:44:41.779: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 00:44:41.779: INFO: Waiting for statefulset status.replicas updated to 0 May 11 00:44:41.782: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 11 00:44:51.790: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 00:44:51.790: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 00:44:51.790: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 00:44:51.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999452s May 11 00:44:52.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992596289s May 11 00:44:53.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987090373s May 11 00:44:54.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981351456s May 11 00:44:55.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976007758s May 11 00:44:56.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971949211s May 11 00:44:57.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965958699s May 11 00:44:58.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961095358s May 11 00:44:59.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956305879s May 11 00:45:00.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.503808ms STEP: Scaling down stateful 
set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8238 May 11 00:45:01.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 00:45:02.104: INFO: stderr: "I0511 00:45:01.995416 2070 log.go:172] (0xc000afd290) (0xc00072bf40) Create stream\nI0511 00:45:01.995500 2070 log.go:172] (0xc000afd290) (0xc00072bf40) Stream added, broadcasting: 1\nI0511 00:45:02.000570 2070 log.go:172] (0xc000afd290) Reply frame received for 1\nI0511 00:45:02.000614 2070 log.go:172] (0xc000afd290) (0xc00047adc0) Create stream\nI0511 00:45:02.000626 2070 log.go:172] (0xc000afd290) (0xc00047adc0) Stream added, broadcasting: 3\nI0511 00:45:02.001696 2070 log.go:172] (0xc000afd290) Reply frame received for 3\nI0511 00:45:02.001745 2070 log.go:172] (0xc000afd290) (0xc000478500) Create stream\nI0511 00:45:02.001763 2070 log.go:172] (0xc000afd290) (0xc000478500) Stream added, broadcasting: 5\nI0511 00:45:02.002526 2070 log.go:172] (0xc000afd290) Reply frame received for 5\nI0511 00:45:02.097380 2070 log.go:172] (0xc000afd290) Data frame received for 5\nI0511 00:45:02.097424 2070 log.go:172] (0xc000478500) (5) Data frame handling\nI0511 00:45:02.097435 2070 log.go:172] (0xc000478500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 00:45:02.097451 2070 log.go:172] (0xc000afd290) Data frame received for 5\nI0511 00:45:02.097464 2070 log.go:172] (0xc000478500) (5) Data frame handling\nI0511 00:45:02.097481 2070 log.go:172] (0xc000afd290) Data frame received for 3\nI0511 00:45:02.097492 2070 log.go:172] (0xc00047adc0) (3) Data frame handling\nI0511 00:45:02.097499 2070 log.go:172] (0xc00047adc0) (3) Data frame sent\nI0511 00:45:02.097508 2070 log.go:172] (0xc000afd290) Data frame received for 3\nI0511 00:45:02.097514 2070 log.go:172] (0xc00047adc0) (3) Data frame handling\nI0511 00:45:02.099047 2070 log.go:172] (0xc000afd290) Data frame received for 1\nI0511 00:45:02.099070 2070 log.go:172] (0xc00072bf40) (1) Data frame handling\nI0511 00:45:02.099087 2070 log.go:172] (0xc00072bf40) (1) Data frame sent\nI0511 00:45:02.099099 2070 log.go:172] (0xc000afd290) (0xc00072bf40) Stream removed, broadcasting: 1\nI0511 00:45:02.099113 2070 log.go:172] (0xc000afd290) Go away received\nI0511 00:45:02.099499 2070 log.go:172] (0xc000afd290) (0xc00072bf40) Stream removed, broadcasting: 1\nI0511 00:45:02.099516 2070 log.go:172] (0xc000afd290) (0xc00047adc0) Stream removed, broadcasting: 3\nI0511 00:45:02.099523 2070 log.go:172] (0xc000afd290) (0xc000478500) Stream removed, broadcasting: 5\n" May 11 00:45:02.104: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 00:45:02.104: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 00:45:02.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 00:45:02.321: INFO: stderr: "I0511 00:45:02.234737 2091 log.go:172] (0xc000938e70) (0xc00082cf00) Create stream\nI0511 00:45:02.234783 2091 log.go:172] (0xc000938e70) (0xc00082cf00) Stream added, broadcasting: 1\nI0511 00:45:02.236904 2091 log.go:172] (0xc000938e70) Reply frame received for 1\nI0511 00:45:02.236932 
2091 log.go:172] (0xc000938e70) (0xc00063f900) Create stream\nI0511 00:45:02.236940 2091 log.go:172] (0xc000938e70) (0xc00063f900) Stream added, broadcasting: 3\nI0511 00:45:02.237938 2091 log.go:172] (0xc000938e70) Reply frame received for 3\nI0511 00:45:02.237985 2091 log.go:172] (0xc000938e70) (0xc00082d4a0) Create stream\nI0511 00:45:02.238009 2091 log.go:172] (0xc000938e70) (0xc00082d4a0) Stream added, broadcasting: 5\nI0511 00:45:02.238927 2091 log.go:172] (0xc000938e70) Reply frame received for 5\nI0511 00:45:02.313531 2091 log.go:172] (0xc000938e70) Data frame received for 5\nI0511 00:45:02.313558 2091 log.go:172] (0xc00082d4a0) (5) Data frame handling\nI0511 00:45:02.313569 2091 log.go:172] (0xc00082d4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 00:45:02.313592 2091 log.go:172] (0xc000938e70) Data frame received for 3\nI0511 00:45:02.313618 2091 log.go:172] (0xc00063f900) (3) Data frame handling\nI0511 00:45:02.313640 2091 log.go:172] (0xc00063f900) (3) Data frame sent\nI0511 00:45:02.313664 2091 log.go:172] (0xc000938e70) Data frame received for 3\nI0511 00:45:02.313684 2091 log.go:172] (0xc00063f900) (3) Data frame handling\nI0511 00:45:02.313714 2091 log.go:172] (0xc000938e70) Data frame received for 5\nI0511 00:45:02.313734 2091 log.go:172] (0xc00082d4a0) (5) Data frame handling\nI0511 00:45:02.315273 2091 log.go:172] (0xc000938e70) Data frame received for 1\nI0511 00:45:02.315285 2091 log.go:172] (0xc00082cf00) (1) Data frame handling\nI0511 00:45:02.315291 2091 log.go:172] (0xc00082cf00) (1) Data frame sent\nI0511 00:45:02.315306 2091 log.go:172] (0xc000938e70) (0xc00082cf00) Stream removed, broadcasting: 1\nI0511 00:45:02.315483 2091 log.go:172] (0xc000938e70) Go away received\nI0511 00:45:02.316115 2091 log.go:172] (0xc000938e70) (0xc00082cf00) Stream removed, broadcasting: 1\nI0511 00:45:02.316136 2091 log.go:172] (0xc000938e70) (0xc00063f900) Stream removed, broadcasting: 3\nI0511 00:45:02.316147 2091 log.go:172] (0xc000938e70) (0xc00082d4a0) Stream removed, broadcasting: 5\n" May 11 00:45:02.321: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 00:45:02.321: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 00:45:02.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8238 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 00:45:02.535: INFO: stderr: "I0511 00:45:02.451801 2111 log.go:172] (0xc000b333f0) (0xc00065d0e0) Create stream\nI0511 00:45:02.451852 2111 log.go:172] (0xc000b333f0) (0xc00065d0e0) Stream added, broadcasting: 1\nI0511 00:45:02.458161 2111 log.go:172] (0xc000b333f0) Reply frame received for 1\nI0511 00:45:02.458214 2111 log.go:172] (0xc000b333f0) (0xc000613e00) Create stream\nI0511 00:45:02.458226 2111 log.go:172] (0xc000b333f0) (0xc000613e00) Stream added, broadcasting: 3\nI0511 00:45:02.459481 2111 log.go:172] (0xc000b333f0) Reply frame received for 3\nI0511 00:45:02.459547 2111 log.go:172] (0xc000b333f0) (0xc0005bc6e0) Create stream\nI0511 00:45:02.459560 2111 log.go:172] (0xc000b333f0) (0xc0005bc6e0) Stream added, broadcasting: 5\nI0511 00:45:02.460771 2111 log.go:172] (0xc000b333f0) Reply frame received for 5\nI0511 00:45:02.521810 2111 log.go:172] (0xc000b333f0) Data frame received for 5\nI0511 00:45:02.521856 2111 log.go:172] (0xc0005bc6e0) (5) Data 
frame handling\nI0511 00:45:02.521902 2111 log.go:172] (0xc0005bc6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 00:45:02.523447 2111 log.go:172] (0xc000b333f0) Data frame received for 3\nI0511 00:45:02.523475 2111 log.go:172] (0xc000613e00) (3) Data frame handling\nI0511 00:45:02.523491 2111 log.go:172] (0xc000613e00) (3) Data frame sent\nI0511 00:45:02.523513 2111 log.go:172] (0xc000b333f0) Data frame received for 3\nI0511 00:45:02.523536 2111 log.go:172] (0xc000613e00) (3) Data frame handling\nI0511 00:45:02.523660 2111 log.go:172] (0xc000b333f0) Data frame received for 5\nI0511 00:45:02.523682 2111 log.go:172] (0xc0005bc6e0) (5) Data frame handling\nI0511 00:45:02.525653 2111 log.go:172] (0xc000b333f0) Data frame received for 1\nI0511 00:45:02.525676 2111 log.go:172] (0xc00065d0e0) (1) Data frame handling\nI0511 00:45:02.525696 2111 log.go:172] (0xc00065d0e0) (1) Data frame sent\nI0511 00:45:02.525712 2111 log.go:172] (0xc000b333f0) (0xc00065d0e0) Stream removed, broadcasting: 1\nI0511 00:45:02.525790 2111 log.go:172] (0xc000b333f0) Go away received\nI0511 00:45:02.526022 2111 log.go:172] (0xc000b333f0) (0xc00065d0e0) Stream removed, broadcasting: 1\nI0511 00:45:02.526050 2111 log.go:172] (0xc000b333f0) (0xc000613e00) Stream removed, broadcasting: 3\nI0511 00:45:02.526064 2111 log.go:172] (0xc000b333f0) (0xc0005bc6e0) Stream removed, broadcasting: 5\n" May 11 00:45:02.535: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 00:45:02.535: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 00:45:02.535: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 00:45:32.559: INFO: Deleting all statefulset in ns statefulset-8238 May 11 00:45:32.562: INFO: Scaling statefulset ss to 0 May 11 00:45:32.570: INFO: Waiting for statefulset status.replicas updated to 0 May 11 00:45:32.572: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:45:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8238" for this suite. 
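------------------------------
The run above demonstrates both ordering guarantees: scale-up created ss-0, ss-1, ss-2 one at a time and halted whenever a readiness probe was failing (the `mv` of index.html out of the httpd docroot is what breaks readiness), and scale-down removed the pods in reverse ordinal order. A sketch of a StatefulSet shaped to behave this way — selector labels are the ones from the log (baz=blah,foo=bar); image, port, and probe path are illustrative, and field names follow the v1.18/1.19-era k8s.io/api used in this run (later releases rename Handler to ProbeHandler):

// Sketch: OrderedReady pod management plus an HTTP readiness probe gives the
// ordered, readiness-gated scaling the spec verifies.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func orderedStatefulSet(replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady is the default; spelled out because ordered,
			// readiness-gated scaling is exactly the behavior under test.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4.38-alpine",
						// While this probe fails (index.html moved out of the
						// docroot), the controller will not create or delete
						// the next ordinal.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
}

func main() {
	_ = orderedStatefulSet(3) // e.g. scale 1 -> 3 as in the spec above
}
------------------------------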
• [SLOW TEST:95.077 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":164,"skipped":2832,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:45:32.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:45:33.131: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:45:35.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754733, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754733, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754733, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754733, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:45:38.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: 
update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:45:48.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9261" for this suite. STEP: Destroying namespace "webhook-9261-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":165,"skipped":2834,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:45:48.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-hl5f STEP: Creating a pod to test atomic-volume-subpath May 11 00:45:48.815: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hl5f" in namespace "subpath-9062" to be "Succeeded or Failed" May 11 00:45:48.848: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.426101ms May 11 00:45:51.002: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186955912s May 11 00:45:53.007: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 4.191965126s May 11 00:45:55.012: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 6.196872276s May 11 00:45:57.016: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 8.201268282s May 11 00:45:59.021: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.205583302s May 11 00:46:01.026: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 12.210523901s May 11 00:46:03.029: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 14.214328664s May 11 00:46:05.034: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 16.218506892s May 11 00:46:07.038: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 18.223053655s May 11 00:46:09.042: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 20.226836628s May 11 00:46:11.047: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Running", Reason="", readiness=true. Elapsed: 22.231800336s May 11 00:46:13.051: INFO: Pod "pod-subpath-test-configmap-hl5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.235581597s STEP: Saw pod success May 11 00:46:13.051: INFO: Pod "pod-subpath-test-configmap-hl5f" satisfied condition "Succeeded or Failed" May 11 00:46:13.054: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-hl5f container test-container-subpath-configmap-hl5f: STEP: delete the pod May 11 00:46:13.117: INFO: Waiting for pod pod-subpath-test-configmap-hl5f to disappear May 11 00:46:13.123: INFO: Pod pod-subpath-test-configmap-hl5f no longer exists STEP: Deleting pod pod-subpath-test-configmap-hl5f May 11 00:46:13.123: INFO: Deleting pod "pod-subpath-test-configmap-hl5f" in namespace "subpath-9062" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:13.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9062" for this suite. 
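The subpath test above mounts a single configMap key over an existing file in the container image. A minimal sketch of such a pod spec, assuming hypothetical names (demo-configmap, key hostname) and a busybox image; the test's actual generated pod differs. The program only constructs and prints the manifest:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hostname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/hostname", // an existing file in the image
					SubPath:   "hostname",      // a single key projected over that file
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}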
• [SLOW TEST:24.482 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":166,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:13.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:13.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9333" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":167,"skipped":2872,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:13.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 00:46:13.479: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:21.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7587" for this suite. 
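The init-container test above depends on spec.initContainers completing in order before the app container starts, even with RestartPolicy Always. A hand-written sketch under assumed names and images, not the pod the test generated:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			// With RestartPolicy Always the kubelet still runs init containers
			// to completion, in order, then keeps the app container running.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}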
• [SLOW TEST:8.170 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":168,"skipped":2874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:21.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:46:21.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d" in namespace "projected-1845" to be "Succeeded or Failed" May 11 00:46:21.748: INFO: Pod "downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117043ms May 11 00:46:23.822: INFO: Pod "downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076996471s May 11 00:46:25.846: INFO: Pod "downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101177229s STEP: Saw pod success May 11 00:46:25.846: INFO: Pod "downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d" satisfied condition "Succeeded or Failed" May 11 00:46:25.850: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d container client-container: STEP: delete the pod May 11 00:46:25.891: INFO: Waiting for pod downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d to disappear May 11 00:46:25.902: INFO: Pod downwardapi-volume-dc2da165-861f-4754-99df-a9d5f022f49d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:25.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1845" for this suite. 
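The projected downwardAPI test above exposes the container's own CPU limit as a file through a resourceFieldRef inside a projected volume. A minimal sketch with assumed mount path and an assumed 500m limit; only the projection mechanics are the point:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// resourceFieldRef in a volume must name the container.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}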
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2951,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:25.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ce069867-f443-494b-9dc4-de39ee77bbac STEP: Creating a pod to test consume secrets May 11 00:46:25.971: INFO: Waiting up to 5m0s for pod "pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf" in namespace "secrets-8360" to be "Succeeded or Failed" May 11 00:46:25.988: INFO: Pod "pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.966399ms May 11 00:46:28.037: INFO: Pod "pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066357752s May 11 00:46:30.041: INFO: Pod "pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070475473s STEP: Saw pod success May 11 00:46:30.042: INFO: Pod "pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf" satisfied condition "Succeeded or Failed" May 11 00:46:30.044: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf container secret-volume-test: STEP: delete the pod May 11 00:46:30.111: INFO: Waiting for pod pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf to disappear May 11 00:46:30.120: INFO: Pod pod-secrets-167ced34-6ac5-4766-86c6-05eb64900edf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:30.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8360" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2953,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:30.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:30.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5804" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2969,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:30.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:46:30.408: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 00:46:35.411: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 00:46:35.411: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 00:46:37.415: INFO: Creating deployment "test-rollover-deployment" May 11 00:46:37.444: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 00:46:39.452: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 00:46:39.459: INFO: Ensure that both replica sets have 1 created replica May 11 00:46:39.466: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 00:46:39.473: INFO: Updating deployment test-rollover-deployment May 11 00:46:39.473: INFO: Wait deployment 
"test-rollover-deployment" to be observed by the deployment controller May 11 00:46:41.483: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 00:46:41.489: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 00:46:41.495: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:41.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754799, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:43.504: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:43.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754803, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:45.503: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:45.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754803, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:47.503: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:47.503: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754803, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:49.504: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:49.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754803, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:51.503: INFO: all replica sets need to contain the pod-template-hash label May 11 00:46:51.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754803, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 00:46:53.504: INFO: May 11 00:46:53.504: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 00:46:53.513: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8598 /apis/apps/v1/namespaces/deployment-8598/deployments/test-rollover-deployment 57c114da-7242-41bc-b039-4f512e7302fc 3222653 2 2020-05-11 00:46:37 +0000 UTC map[name:rollover-pod] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 00:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 00:46:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ac8eb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 00:46:37 +0000 UTC,LastTransitionTime:2020-05-11 00:46:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-11 00:46:53 +0000 UTC,LastTransitionTime:2020-05-11 00:46:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 00:46:53.517: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-8598 /apis/apps/v1/namespaces/deployment-8598/replicasets/test-rollover-deployment-7c4fd9c879 f16b1ee3-36b7-41e2-ab3c-e3efc37b4e73 3222641 2 2020-05-11 
00:46:39 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 57c114da-7242-41bc-b039-4f512e7302fc 0xc004ac9737 0xc004ac9738}] [] [{kube-controller-manager Update apps/v1 2020-05-11 00:46:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"57c114da-7242-41bc-b039-4f512e7302fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ac97c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 00:46:53.517: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 00:46:53.518: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8598 /apis/apps/v1/namespaces/deployment-8598/replicasets/test-rollover-controller 633868db-8b59-445d-b035-02f0039413b7 3222651 2 2020-05-11 00:46:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 57c114da-7242-41bc-b039-4f512e7302fc 0xc004ac9527 0xc004ac9528}] [] [{e2e.test Update apps/v1 2020-05-11 00:46:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 00:46:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"57c114da-7242-41bc-b039-4f512e7302fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004ac95c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 00:46:53.518: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-8598 /apis/apps/v1/namespaces/deployment-8598/replicasets/test-rollover-deployment-5686c4cfd5 c7143c13-e88c-450d-8c55-d8958b6dbbb3 3222585 2 2020-05-11 00:46:37 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 57c114da-7242-41bc-b039-4f512e7302fc 0xc004ac9637 0xc004ac9638}] [] [{kube-controller-manager Update apps/v1 2020-05-11 00:46:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"57c114da-7242-41bc-b039-4f512e7302fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ac96c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 00:46:53.521: INFO: Pod "test-rollover-deployment-7c4fd9c879-xp2w6" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-xp2w6 test-rollover-deployment-7c4fd9c879- deployment-8598 /api/v1/namespaces/deployment-8598/pods/test-rollover-deployment-7c4fd9c879-xp2w6 f0e3ea21-c81e-41a6-a5e8-b2898569254f 3222611 0 2020-05-11 00:46:39 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 f16b1ee3-36b7-41e2-ab3c-e3efc37b4e73 0xc004ac9d67 0xc004ac9d68}] [] [{kube-controller-manager Update v1 2020-05-11 00:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f16b1ee3-36b7-41e2-ab3c-e3efc37b4e73\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 00:46:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w9kd8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w9kd8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w9kd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:46:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-11 00:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 00:46:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.115,StartTime:2020-05-11 00:46:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 00:46:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://40bf8fbea8140d6c03c0ae08a76fedaa84dced888f6f10b03552e47042fa4567,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:53.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8598" for this suite. • [SLOW TEST:23.254 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":172,"skipped":3010,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:53.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 00:46:53.832: INFO: Waiting up to 5m0s for pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570" in namespace "emptydir-2483" to be "Succeeded or Failed" May 11 00:46:53.848: INFO: Pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570": Phase="Pending", Reason="", readiness=false. Elapsed: 16.074572ms May 11 00:46:55.851: INFO: Pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019539286s May 11 00:46:57.856: INFO: Pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024045399s May 11 00:46:59.860: INFO: Pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02843949s STEP: Saw pod success May 11 00:46:59.860: INFO: Pod "pod-d5332111-4ac1-444c-bf48-4e881ce3a570" satisfied condition "Succeeded or Failed" May 11 00:46:59.864: INFO: Trying to get logs from node latest-worker pod pod-d5332111-4ac1-444c-bf48-4e881ce3a570 container test-container: STEP: delete the pod May 11 00:46:59.897: INFO: Waiting for pod pod-d5332111-4ac1-444c-bf48-4e881ce3a570 to disappear May 11 00:46:59.930: INFO: Pod pod-d5332111-4ac1-444c-bf48-4e881ce3a570 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:46:59.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2483" for this suite. • [SLOW TEST:6.410 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":3021,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:46:59.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:47:00.404: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:47:02.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754820, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754820, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754820, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724754820, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:47:05.455: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:47:05.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2540-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:06.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2994" for this suite. STEP: Destroying namespace "webhook-2994-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.786 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":174,"skipped":3034,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:06.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-65f2128f-0585-47a8-b2ec-57028632bd66 STEP: Creating a pod to test consume configMaps May 11 00:47:06.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207" in namespace "configmap-82" to be "Succeeded or Failed" May 11 00:47:06.877: INFO: Pod "pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207": Phase="Pending", Reason="", readiness=false. Elapsed: 29.333464ms May 11 00:47:08.880: INFO: Pod "pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033234386s May 11 00:47:10.884: INFO: Pod "pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037282279s STEP: Saw pod success May 11 00:47:10.885: INFO: Pod "pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207" satisfied condition "Succeeded or Failed" May 11 00:47:10.887: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207 container configmap-volume-test: STEP: delete the pod May 11 00:47:10.921: INFO: Waiting for pod pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207 to disappear May 11 00:47:10.936: INFO: Pod pod-configmaps-bbe0cbfc-d594-4076-b57e-b3d50b28a207 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-82" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":3048,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:10.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:16.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7192" for this suite. 
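Adoption in the ReplicationController test above works because the controller's selector matches the orphan pod's labels, so the controller sets itself as the pod's controller ownerReference instead of creating a new replica. A sketch of the two objects involved, with assumed names and the httpd image seen elsewhere in this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	one := int32(1)

	// An orphan pod carrying the label the controller will select on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "httpd:2.4.38-alpine"}},
		},
	}

	// An RC whose selector matches the pod: with replicas=1 already satisfied
	// by the orphan, the controller adopts it rather than creating a replica.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}

	for _, obj := range []interface{}{pod, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}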
• [SLOW TEST:5.199 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":176,"skipped":3053,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:16.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 00:47:16.312: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5610 /api/v1/namespaces/watch-5610/configmaps/e2e-watch-test-watch-closed 043d4bcb-015b-4e04-a6b1-c1b38584effb 3222892 0 2020-05-11 00:47:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 00:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:47:16.312: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5610 /api/v1/namespaces/watch-5610/configmaps/e2e-watch-test-watch-closed 043d4bcb-015b-4e04-a6b1-c1b38584effb 3222893 0 2020-05-11 00:47:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 00:47:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 00:47:16.337: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5610 /api/v1/namespaces/watch-5610/configmaps/e2e-watch-test-watch-closed 043d4bcb-015b-4e04-a6b1-c1b38584effb 3222894 0 2020-05-11 00:47:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 00:47:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:47:16.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5610 
/api/v1/namespaces/watch-5610/configmaps/e2e-watch-test-watch-closed 043d4bcb-015b-4e04-a6b1-c1b38584effb 3222896 0 2020-05-11 00:47:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 00:47:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:16.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5610" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":177,"skipped":3054,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:16.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4172 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 00:47:16.442: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 00:47:16.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:47:18.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 00:47:20.510: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:47:22.523: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:47:24.518: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:47:26.512: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:47:28.512: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 00:47:30.512: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 00:47:30.519: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 00:47:32.522: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 00:47:36.621: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4172 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:47:36.621: INFO: >>> kubeConfig: /root/.kube/config I0511 00:47:36.652415 7 log.go:172] (0xc001ac6d10) (0xc0025c9ae0) Create stream I0511 00:47:36.652445 7 log.go:172] (0xc001ac6d10) (0xc0025c9ae0) Stream added, broadcasting: 1 I0511 00:47:36.654674 7 log.go:172] (0xc001ac6d10) Reply frame 
received for 1 I0511 00:47:36.654720 7 log.go:172] (0xc001ac6d10) (0xc0020828c0) Create stream I0511 00:47:36.654736 7 log.go:172] (0xc001ac6d10) (0xc0020828c0) Stream added, broadcasting: 3 I0511 00:47:36.655609 7 log.go:172] (0xc001ac6d10) Reply frame received for 3 I0511 00:47:36.655656 7 log.go:172] (0xc001ac6d10) (0xc0025c9b80) Create stream I0511 00:47:36.655672 7 log.go:172] (0xc001ac6d10) (0xc0025c9b80) Stream added, broadcasting: 5 I0511 00:47:36.656757 7 log.go:172] (0xc001ac6d10) Reply frame received for 5 I0511 00:47:37.714420 7 log.go:172] (0xc001ac6d10) Data frame received for 5 I0511 00:47:37.714462 7 log.go:172] (0xc0025c9b80) (5) Data frame handling I0511 00:47:37.714490 7 log.go:172] (0xc001ac6d10) Data frame received for 3 I0511 00:47:37.714507 7 log.go:172] (0xc0020828c0) (3) Data frame handling I0511 00:47:37.714528 7 log.go:172] (0xc0020828c0) (3) Data frame sent I0511 00:47:37.714546 7 log.go:172] (0xc001ac6d10) Data frame received for 3 I0511 00:47:37.714557 7 log.go:172] (0xc0020828c0) (3) Data frame handling I0511 00:47:37.716414 7 log.go:172] (0xc001ac6d10) Data frame received for 1 I0511 00:47:37.716436 7 log.go:172] (0xc0025c9ae0) (1) Data frame handling I0511 00:47:37.716456 7 log.go:172] (0xc0025c9ae0) (1) Data frame sent I0511 00:47:37.716478 7 log.go:172] (0xc001ac6d10) (0xc0025c9ae0) Stream removed, broadcasting: 1 I0511 00:47:37.716507 7 log.go:172] (0xc001ac6d10) Go away received I0511 00:47:37.716603 7 log.go:172] (0xc001ac6d10) (0xc0025c9ae0) Stream removed, broadcasting: 1 I0511 00:47:37.716639 7 log.go:172] (0xc001ac6d10) (0xc0020828c0) Stream removed, broadcasting: 3 I0511 00:47:37.716668 7 log.go:172] (0xc001ac6d10) (0xc0025c9b80) Stream removed, broadcasting: 5 May 11 00:47:37.716: INFO: Found all expected endpoints: [netserver-0] May 11 00:47:37.720: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.207 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4172 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:47:37.720: INFO: >>> kubeConfig: /root/.kube/config I0511 00:47:37.750557 7 log.go:172] (0xc0029bd970) (0xc002082dc0) Create stream I0511 00:47:37.750594 7 log.go:172] (0xc0029bd970) (0xc002082dc0) Stream added, broadcasting: 1 I0511 00:47:37.752206 7 log.go:172] (0xc0029bd970) Reply frame received for 1 I0511 00:47:37.752236 7 log.go:172] (0xc0029bd970) (0xc0025c9c20) Create stream I0511 00:47:37.752248 7 log.go:172] (0xc0029bd970) (0xc0025c9c20) Stream added, broadcasting: 3 I0511 00:47:37.753363 7 log.go:172] (0xc0029bd970) Reply frame received for 3 I0511 00:47:37.753406 7 log.go:172] (0xc0029bd970) (0xc0029c40a0) Create stream I0511 00:47:37.753424 7 log.go:172] (0xc0029bd970) (0xc0029c40a0) Stream added, broadcasting: 5 I0511 00:47:37.754339 7 log.go:172] (0xc0029bd970) Reply frame received for 5 I0511 00:47:38.807837 7 log.go:172] (0xc0029bd970) Data frame received for 3 I0511 00:47:38.807870 7 log.go:172] (0xc0025c9c20) (3) Data frame handling I0511 00:47:38.807884 7 log.go:172] (0xc0025c9c20) (3) Data frame sent I0511 00:47:38.807902 7 log.go:172] (0xc0029bd970) Data frame received for 3 I0511 00:47:38.807912 7 log.go:172] (0xc0025c9c20) (3) Data frame handling I0511 00:47:38.808018 7 log.go:172] (0xc0029bd970) Data frame received for 5 I0511 00:47:38.808052 7 log.go:172] (0xc0029c40a0) (5) Data frame handling I0511 00:47:38.810098 7 log.go:172] (0xc0029bd970) Data frame received for 1 I0511 00:47:38.810117 7 log.go:172] 
(0xc002082dc0) (1) Data frame handling I0511 00:47:38.810136 7 log.go:172] (0xc002082dc0) (1) Data frame sent I0511 00:47:38.810148 7 log.go:172] (0xc0029bd970) (0xc002082dc0) Stream removed, broadcasting: 1 I0511 00:47:38.810168 7 log.go:172] (0xc0029bd970) Go away received I0511 00:47:38.810324 7 log.go:172] (0xc0029bd970) (0xc002082dc0) Stream removed, broadcasting: 1 I0511 00:47:38.810350 7 log.go:172] (0xc0029bd970) (0xc0025c9c20) Stream removed, broadcasting: 3 I0511 00:47:38.810362 7 log.go:172] (0xc0029bd970) (0xc0029c40a0) Stream removed, broadcasting: 5 May 11 00:47:38.810: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:38.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4172" for this suite. • [SLOW TEST:22.472 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":3067,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:38.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-50fa7456-2854-4952-934b-7df58eeab687 STEP: Creating a pod to test consume secrets May 11 00:47:39.098: INFO: Waiting up to 5m0s for pod "pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7" in namespace "secrets-3575" to be "Succeeded or Failed" May 11 00:47:39.103: INFO: Pod "pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.982056ms May 11 00:47:41.254: INFO: Pod "pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156081353s May 11 00:47:43.258: INFO: Pod "pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.1597094s STEP: Saw pod success May 11 00:47:43.258: INFO: Pod "pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7" satisfied condition "Succeeded or Failed" May 11 00:47:43.261: INFO: Trying to get logs from node latest-worker pod pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7 container secret-volume-test: STEP: delete the pod May 11 00:47:43.359: INFO: Waiting for pod pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7 to disappear May 11 00:47:43.386: INFO: Pod pod-secrets-c16b14c0-c7b9-4979-8887-f4be487552d7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:47:43.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3575" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:47:43.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:47:49.603: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.606: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.610: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.612: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.620: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.625: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod 
dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.627: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:49.632: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:47:54.637: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.641: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.645: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.649: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.659: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.663: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.666: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.669: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:54.676: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:47:59.637: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.641: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.648: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.656: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.659: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.661: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.663: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:47:59.669: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:48:04.637: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.640: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.646: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.655: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.657: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.660: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.663: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:04.668: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:48:09.637: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.641: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.648: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested 
resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.658: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.661: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.663: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.666: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:09.672: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:48:14.636: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.640: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.648: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.658: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.661: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.664: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.667: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local from pod dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d: the server could not find the requested resource (get pods dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d) May 11 00:48:14.672: INFO: Lookups using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9368.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local jessie_udp@dns-test-service-2.dns-9368.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9368.svc.cluster.local] May 11 00:48:19.715: INFO: DNS probes using dns-9368/dns-test-99846be0-3b8b-4228-b2d6-1f172bdbc78d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:48:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9368" for this suite. • [SLOW TEST:36.965 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":180,"skipped":3102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:48:20.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:48:20.453: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3658 I0511 00:48:20.474337 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3658, replica count: 1 I0511 00:48:21.524683 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:48:22.524946 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 00:48:23.525273 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0511 00:48:24.525523 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 00:48:24.703: INFO: Created: latency-svc-844qc May 11 00:48:24.719: INFO: Got endpoints: latency-svc-844qc [93.656014ms] May 11 00:48:24.780: INFO: Created: latency-svc-7sgz7 May 11 00:48:24.796: INFO: Got endpoints: latency-svc-7sgz7 [76.901273ms] May 11 00:48:24.871: INFO: Created: latency-svc-z65k2 May 11 00:48:24.898: INFO: Got endpoints: latency-svc-z65k2 [179.042241ms] May 11 00:48:24.929: INFO: Created: latency-svc-x7bv7 May 11 00:48:24.940: INFO: Got endpoints: latency-svc-x7bv7 [220.838504ms] May 11 00:48:24.958: INFO: Created: latency-svc-s5jsh May 11 00:48:25.033: INFO: Got endpoints: latency-svc-s5jsh [313.66637ms] May 11 00:48:25.035: INFO: Created: latency-svc-7bgxm May 11 00:48:25.041: INFO: Got endpoints: latency-svc-7bgxm [321.007573ms] May 11 00:48:25.062: INFO: Created: latency-svc-kqswq May 11 00:48:25.078: INFO: Got endpoints: latency-svc-kqswq [358.219248ms] May 11 00:48:25.098: INFO: Created: latency-svc-74wc8 May 11 00:48:25.122: INFO: Got endpoints: latency-svc-74wc8 [402.605783ms] May 11 00:48:25.188: INFO: Created: latency-svc-b9kfd May 11 00:48:25.192: INFO: Got endpoints: latency-svc-b9kfd [472.564233ms] May 11 00:48:25.216: INFO: Created: latency-svc-qn5wj May 11 00:48:25.228: INFO: Got endpoints: latency-svc-qn5wj [509.171262ms] May 11 00:48:25.249: INFO: Created: latency-svc-zvvjf May 11 00:48:25.259: INFO: Got endpoints: latency-svc-zvvjf [539.200624ms] May 11 00:48:25.326: INFO: Created: latency-svc-gkr4k May 11 00:48:25.362: INFO: Created: latency-svc-p7x5j May 11 00:48:25.363: INFO: Got endpoints: latency-svc-gkr4k [642.972997ms] May 11 00:48:25.385: INFO: Got endpoints: latency-svc-p7x5j [665.738773ms] May 11 00:48:25.408: INFO: Created: latency-svc-7x2tw May 11 00:48:25.470: INFO: Got endpoints: latency-svc-7x2tw [750.343851ms] May 11 00:48:25.480: INFO: Created: latency-svc-b8kn9 May 11 00:48:25.494: INFO: Got endpoints: latency-svc-b8kn9 [774.090254ms] May 11 00:48:25.544: INFO: Created: latency-svc-7t6st May 11 00:48:25.566: INFO: Got endpoints: latency-svc-7t6st [847.097194ms] May 11 00:48:25.620: INFO: Created: latency-svc-zpk6d May 11 00:48:25.639: INFO: Got endpoints: latency-svc-zpk6d [842.659714ms] May 11 00:48:25.684: INFO: Created: latency-svc-l5nj4 May 11 00:48:25.739: INFO: Got endpoints: latency-svc-l5nj4 [840.990535ms] May 11 00:48:25.831: INFO: Created: latency-svc-9n97d May 11 00:48:25.962: INFO: Got endpoints: latency-svc-9n97d [1.021632385s] May 11 00:48:25.998: INFO: Created: latency-svc-vtlgs May 11 00:48:26.037: INFO: Got endpoints: latency-svc-vtlgs [1.004219114s] May 11 00:48:26.116: INFO: Created: latency-svc-p4cdk May 11 00:48:26.125: INFO: Got endpoints: latency-svc-p4cdk [1.084333751s] May 11 00:48:26.146: INFO: Created: latency-svc-ndq6q May 11 00:48:26.180: INFO: Got endpoints: latency-svc-ndq6q [1.102402024s] May 11 00:48:26.267: INFO: Created: latency-svc-xk2sr May 11 00:48:26.292: INFO: Got endpoints: latency-svc-xk2sr [1.169964926s] May 11 00:48:26.321: INFO: Created: latency-svc-9zc4g May 11 00:48:26.350: INFO: Got endpoints: latency-svc-9zc4g [1.157605414s] May 11 00:48:26.416: INFO: Created: latency-svc-ppfvm May 11 00:48:26.430: INFO: Got endpoints: latency-svc-ppfvm [1.201788296s] May 11 00:48:26.466: INFO: Created: latency-svc-mbjb6 May 11 00:48:26.475: INFO: Got endpoints: latency-svc-mbjb6 
[1.216059317s] May 11 00:48:26.502: INFO: Created: latency-svc-ls4m8 May 11 00:48:26.511: INFO: Got endpoints: latency-svc-ls4m8 [1.148085825s] May 11 00:48:26.584: INFO: Created: latency-svc-cskg8 May 11 00:48:26.601: INFO: Got endpoints: latency-svc-cskg8 [1.215661802s] May 11 00:48:26.654: INFO: Created: latency-svc-4fl9g May 11 00:48:26.707: INFO: Got endpoints: latency-svc-4fl9g [1.237471535s] May 11 00:48:26.730: INFO: Created: latency-svc-k4v4j May 11 00:48:26.776: INFO: Got endpoints: latency-svc-k4v4j [1.282052931s] May 11 00:48:26.841: INFO: Created: latency-svc-ksckr May 11 00:48:26.853: INFO: Got endpoints: latency-svc-ksckr [1.2868847s] May 11 00:48:26.904: INFO: Created: latency-svc-lwbvb May 11 00:48:26.926: INFO: Got endpoints: latency-svc-lwbvb [1.287536958s] May 11 00:48:26.997: INFO: Created: latency-svc-vzvkb May 11 00:48:27.022: INFO: Got endpoints: latency-svc-vzvkb [1.282470214s] May 11 00:48:27.022: INFO: Created: latency-svc-2q6jm May 11 00:48:27.046: INFO: Got endpoints: latency-svc-2q6jm [1.084086756s] May 11 00:48:27.095: INFO: Created: latency-svc-x8n7n May 11 00:48:27.153: INFO: Got endpoints: latency-svc-x8n7n [1.115966055s] May 11 00:48:27.168: INFO: Created: latency-svc-s54h5 May 11 00:48:27.179: INFO: Got endpoints: latency-svc-s54h5 [1.053458989s] May 11 00:48:27.198: INFO: Created: latency-svc-wzcfh May 11 00:48:27.210: INFO: Got endpoints: latency-svc-wzcfh [1.029559478s] May 11 00:48:27.232: INFO: Created: latency-svc-48jbb May 11 00:48:27.245: INFO: Got endpoints: latency-svc-48jbb [953.174837ms] May 11 00:48:27.298: INFO: Created: latency-svc-c4c5c May 11 00:48:27.312: INFO: Got endpoints: latency-svc-c4c5c [962.00418ms] May 11 00:48:27.371: INFO: Created: latency-svc-98fbf May 11 00:48:27.451: INFO: Got endpoints: latency-svc-98fbf [1.021384943s] May 11 00:48:27.453: INFO: Created: latency-svc-8vqzn May 11 00:48:27.484: INFO: Got endpoints: latency-svc-8vqzn [1.009048863s] May 11 00:48:27.515: INFO: Created: latency-svc-bt5z5 May 11 00:48:27.529: INFO: Got endpoints: latency-svc-bt5z5 [1.018480055s] May 11 00:48:27.607: INFO: Created: latency-svc-sv6ng May 11 00:48:27.613: INFO: Got endpoints: latency-svc-sv6ng [1.012243111s] May 11 00:48:27.636: INFO: Created: latency-svc-2xz8h May 11 00:48:27.649: INFO: Got endpoints: latency-svc-2xz8h [941.576947ms] May 11 00:48:27.700: INFO: Created: latency-svc-rkx5g May 11 00:48:27.770: INFO: Got endpoints: latency-svc-rkx5g [994.259777ms] May 11 00:48:27.810: INFO: Created: latency-svc-t7m48 May 11 00:48:27.854: INFO: Got endpoints: latency-svc-t7m48 [1.000164107s] May 11 00:48:27.925: INFO: Created: latency-svc-6bwxr May 11 00:48:27.975: INFO: Got endpoints: latency-svc-6bwxr [1.049136191s] May 11 00:48:28.020: INFO: Created: latency-svc-22qwk May 11 00:48:28.082: INFO: Got endpoints: latency-svc-22qwk [1.060119666s] May 11 00:48:28.120: INFO: Created: latency-svc-cjhvj May 11 00:48:28.136: INFO: Got endpoints: latency-svc-cjhvj [1.090090988s] May 11 00:48:28.232: INFO: Created: latency-svc-hfwzt May 11 00:48:28.266: INFO: Got endpoints: latency-svc-hfwzt [1.113013904s] May 11 00:48:28.296: INFO: Created: latency-svc-6q72j May 11 00:48:28.304: INFO: Got endpoints: latency-svc-6q72j [1.125663524s] May 11 00:48:28.325: INFO: Created: latency-svc-852nm May 11 00:48:28.386: INFO: Got endpoints: latency-svc-852nm [1.176426278s] May 11 00:48:28.426: INFO: Created: latency-svc-kvsnl May 11 00:48:28.437: INFO: Got endpoints: latency-svc-kvsnl [1.191717584s] May 11 00:48:28.463: INFO: Created: latency-svc-ntgsz May 
11 00:48:28.480: INFO: Got endpoints: latency-svc-ntgsz [1.167516042s] May 11 00:48:28.554: INFO: Created: latency-svc-58q56 May 11 00:48:28.564: INFO: Got endpoints: latency-svc-58q56 [1.112682179s] May 11 00:48:28.612: INFO: Created: latency-svc-lmqgb May 11 00:48:28.624: INFO: Got endpoints: latency-svc-lmqgb [1.140670699s] May 11 00:48:28.716: INFO: Created: latency-svc-x49jv May 11 00:48:28.769: INFO: Got endpoints: latency-svc-x49jv [1.240239327s] May 11 00:48:28.854: INFO: Created: latency-svc-gh6jk May 11 00:48:28.858: INFO: Got endpoints: latency-svc-gh6jk [1.244478596s] May 11 00:48:28.923: INFO: Created: latency-svc-98xqx May 11 00:48:28.937: INFO: Got endpoints: latency-svc-98xqx [1.287914406s] May 11 00:48:29.006: INFO: Created: latency-svc-fn5wk May 11 00:48:29.009: INFO: Got endpoints: latency-svc-fn5wk [1.238768886s] May 11 00:48:29.069: INFO: Created: latency-svc-44n7c May 11 00:48:29.081: INFO: Got endpoints: latency-svc-44n7c [1.227323102s] May 11 00:48:29.171: INFO: Created: latency-svc-wv8b5 May 11 00:48:29.200: INFO: Got endpoints: latency-svc-wv8b5 [1.224064402s] May 11 00:48:29.200: INFO: Created: latency-svc-8gcxl May 11 00:48:29.229: INFO: Got endpoints: latency-svc-8gcxl [1.147086276s] May 11 00:48:29.255: INFO: Created: latency-svc-vfsjd May 11 00:48:29.269: INFO: Got endpoints: latency-svc-vfsjd [1.133063895s] May 11 00:48:29.355: INFO: Created: latency-svc-xvqz2 May 11 00:48:29.392: INFO: Got endpoints: latency-svc-xvqz2 [1.125482996s] May 11 00:48:29.470: INFO: Created: latency-svc-pn8dc May 11 00:48:29.478: INFO: Got endpoints: latency-svc-pn8dc [1.173739805s] May 11 00:48:29.501: INFO: Created: latency-svc-tsqhr May 11 00:48:29.515: INFO: Got endpoints: latency-svc-tsqhr [1.128381181s] May 11 00:48:29.543: INFO: Created: latency-svc-xvmz4 May 11 00:48:29.563: INFO: Got endpoints: latency-svc-xvmz4 [1.12616247s] May 11 00:48:29.625: INFO: Created: latency-svc-2pdf6 May 11 00:48:29.638: INFO: Got endpoints: latency-svc-2pdf6 [1.15820686s] May 11 00:48:29.687: INFO: Created: latency-svc-kvx9j May 11 00:48:29.711: INFO: Got endpoints: latency-svc-kvx9j [1.147224941s] May 11 00:48:29.770: INFO: Created: latency-svc-c8blh May 11 00:48:29.776: INFO: Got endpoints: latency-svc-c8blh [1.1517781s] May 11 00:48:29.799: INFO: Created: latency-svc-prknl May 11 00:48:29.816: INFO: Got endpoints: latency-svc-prknl [1.046583063s] May 11 00:48:29.835: INFO: Created: latency-svc-rhbb7 May 11 00:48:29.847: INFO: Got endpoints: latency-svc-rhbb7 [989.40271ms] May 11 00:48:29.943: INFO: Created: latency-svc-xwdm4 May 11 00:48:29.949: INFO: Got endpoints: latency-svc-xwdm4 [1.011455247s] May 11 00:48:29.981: INFO: Created: latency-svc-h2qf7 May 11 00:48:30.010: INFO: Got endpoints: latency-svc-h2qf7 [1.000812839s] May 11 00:48:30.030: INFO: Created: latency-svc-wtqmw May 11 00:48:30.104: INFO: Got endpoints: latency-svc-wtqmw [1.023318813s] May 11 00:48:30.106: INFO: Created: latency-svc-njr9z May 11 00:48:30.123: INFO: Got endpoints: latency-svc-njr9z [923.315456ms] May 11 00:48:30.154: INFO: Created: latency-svc-n2xlh May 11 00:48:30.184: INFO: Got endpoints: latency-svc-n2xlh [954.857097ms] May 11 00:48:30.260: INFO: Created: latency-svc-6grpv May 11 00:48:30.274: INFO: Got endpoints: latency-svc-6grpv [1.004764412s] May 11 00:48:30.297: INFO: Created: latency-svc-9hcd2 May 11 00:48:30.311: INFO: Got endpoints: latency-svc-9hcd2 [918.484855ms] May 11 00:48:30.357: INFO: Created: latency-svc-vcx4r May 11 00:48:30.416: INFO: Got endpoints: latency-svc-vcx4r [937.482165ms] May 
11 00:48:30.437: INFO: Created: latency-svc-mch74 May 11 00:48:30.459: INFO: Got endpoints: latency-svc-mch74 [944.493026ms] May 11 00:48:30.489: INFO: Created: latency-svc-8z8cs May 11 00:48:30.505: INFO: Got endpoints: latency-svc-8z8cs [941.809918ms] May 11 00:48:30.564: INFO: Created: latency-svc-z7c29 May 11 00:48:30.569: INFO: Got endpoints: latency-svc-z7c29 [931.249633ms] May 11 00:48:30.611: INFO: Created: latency-svc-xtlpk May 11 00:48:30.625: INFO: Got endpoints: latency-svc-xtlpk [913.569332ms] May 11 00:48:30.659: INFO: Created: latency-svc-ftwf7 May 11 00:48:30.734: INFO: Got endpoints: latency-svc-ftwf7 [957.577295ms] May 11 00:48:30.736: INFO: Created: latency-svc-6c4np May 11 00:48:30.744: INFO: Got endpoints: latency-svc-6c4np [928.243272ms] May 11 00:48:30.780: INFO: Created: latency-svc-qjj4h May 11 00:48:30.805: INFO: Got endpoints: latency-svc-qjj4h [957.720153ms] May 11 00:48:30.827: INFO: Created: latency-svc-2rxrs May 11 00:48:30.895: INFO: Got endpoints: latency-svc-2rxrs [946.135656ms] May 11 00:48:30.923: INFO: Created: latency-svc-gl89b May 11 00:48:30.945: INFO: Got endpoints: latency-svc-gl89b [935.552358ms] May 11 00:48:30.975: INFO: Created: latency-svc-wdp6w May 11 00:48:30.987: INFO: Got endpoints: latency-svc-wdp6w [882.794578ms] May 11 00:48:31.033: INFO: Created: latency-svc-lszkj May 11 00:48:31.060: INFO: Created: latency-svc-6hlv9 May 11 00:48:31.060: INFO: Got endpoints: latency-svc-lszkj [936.667239ms] May 11 00:48:31.091: INFO: Got endpoints: latency-svc-6hlv9 [906.720655ms] May 11 00:48:31.121: INFO: Created: latency-svc-5nb4g May 11 00:48:31.200: INFO: Got endpoints: latency-svc-5nb4g [926.3912ms] May 11 00:48:31.203: INFO: Created: latency-svc-z9q5w May 11 00:48:31.211: INFO: Got endpoints: latency-svc-z9q5w [900.5563ms] May 11 00:48:31.227: INFO: Created: latency-svc-xfphv May 11 00:48:31.251: INFO: Got endpoints: latency-svc-xfphv [835.796509ms] May 11 00:48:31.282: INFO: Created: latency-svc-648gq May 11 00:48:31.398: INFO: Got endpoints: latency-svc-648gq [938.806351ms] May 11 00:48:31.406: INFO: Created: latency-svc-rl2gj May 11 00:48:31.415: INFO: Got endpoints: latency-svc-rl2gj [909.98711ms] May 11 00:48:31.438: INFO: Created: latency-svc-mqjpw May 11 00:48:31.446: INFO: Got endpoints: latency-svc-mqjpw [876.395602ms] May 11 00:48:31.467: INFO: Created: latency-svc-2s8nh May 11 00:48:31.487: INFO: Got endpoints: latency-svc-2s8nh [862.341567ms] May 11 00:48:31.566: INFO: Created: latency-svc-bsqgq May 11 00:48:31.573: INFO: Got endpoints: latency-svc-bsqgq [838.654951ms] May 11 00:48:31.599: INFO: Created: latency-svc-bnz4t May 11 00:48:31.615: INFO: Got endpoints: latency-svc-bnz4t [870.293311ms] May 11 00:48:31.647: INFO: Created: latency-svc-s4qcr May 11 00:48:31.657: INFO: Got endpoints: latency-svc-s4qcr [852.430837ms] May 11 00:48:31.712: INFO: Created: latency-svc-gsp9t May 11 00:48:31.739: INFO: Got endpoints: latency-svc-gsp9t [844.4532ms] May 11 00:48:31.740: INFO: Created: latency-svc-tf5sr May 11 00:48:31.757: INFO: Got endpoints: latency-svc-tf5sr [811.344857ms] May 11 00:48:31.775: INFO: Created: latency-svc-xfg7g May 11 00:48:31.790: INFO: Got endpoints: latency-svc-xfg7g [802.910064ms] May 11 00:48:31.902: INFO: Created: latency-svc-dxjq2 May 11 00:48:31.916: INFO: Got endpoints: latency-svc-dxjq2 [856.347399ms] May 11 00:48:31.950: INFO: Created: latency-svc-jf5d2 May 11 00:48:31.988: INFO: Got endpoints: latency-svc-jf5d2 [897.054771ms] May 11 00:48:32.085: INFO: Created: latency-svc-v7hvh May 11 00:48:32.103: INFO: 
Got endpoints: latency-svc-v7hvh [902.469129ms] May 11 00:48:32.182: INFO: Created: latency-svc-9gv95 May 11 00:48:32.213: INFO: Created: latency-svc-9vbxx May 11 00:48:32.213: INFO: Got endpoints: latency-svc-9gv95 [1.002016304s] May 11 00:48:32.235: INFO: Got endpoints: latency-svc-9vbxx [983.422167ms] May 11 00:48:32.270: INFO: Created: latency-svc-bz2xc May 11 00:48:32.308: INFO: Got endpoints: latency-svc-bz2xc [910.354366ms] May 11 00:48:32.319: INFO: Created: latency-svc-76s5p May 11 00:48:32.349: INFO: Got endpoints: latency-svc-76s5p [934.001439ms] May 11 00:48:32.381: INFO: Created: latency-svc-44pl7 May 11 00:48:32.398: INFO: Got endpoints: latency-svc-44pl7 [952.837832ms] May 11 00:48:32.449: INFO: Created: latency-svc-z7s8x May 11 00:48:32.450: INFO: Got endpoints: latency-svc-z7s8x [962.56994ms] May 11 00:48:32.495: INFO: Created: latency-svc-kwsxg May 11 00:48:32.506: INFO: Got endpoints: latency-svc-kwsxg [933.629143ms] May 11 00:48:32.523: INFO: Created: latency-svc-qf7j6 May 11 00:48:32.615: INFO: Got endpoints: latency-svc-qf7j6 [999.864601ms] May 11 00:48:32.627: INFO: Created: latency-svc-z786j May 11 00:48:32.639: INFO: Got endpoints: latency-svc-z786j [981.382889ms] May 11 00:48:32.657: INFO: Created: latency-svc-t4pds May 11 00:48:32.670: INFO: Got endpoints: latency-svc-t4pds [930.352745ms] May 11 00:48:32.687: INFO: Created: latency-svc-npffb May 11 00:48:32.700: INFO: Got endpoints: latency-svc-npffb [942.89994ms] May 11 00:48:32.753: INFO: Created: latency-svc-p6p72 May 11 00:48:32.760: INFO: Got endpoints: latency-svc-p6p72 [969.471111ms] May 11 00:48:32.781: INFO: Created: latency-svc-glqsp May 11 00:48:32.796: INFO: Got endpoints: latency-svc-glqsp [879.969899ms] May 11 00:48:32.825: INFO: Created: latency-svc-9n4m4 May 11 00:48:32.851: INFO: Got endpoints: latency-svc-9n4m4 [862.889852ms] May 11 00:48:32.931: INFO: Created: latency-svc-skvr9 May 11 00:48:32.953: INFO: Got endpoints: latency-svc-skvr9 [850.020976ms] May 11 00:48:32.991: INFO: Created: latency-svc-kbpdb May 11 00:48:33.007: INFO: Got endpoints: latency-svc-kbpdb [794.031699ms] May 11 00:48:33.027: INFO: Created: latency-svc-r2pfr May 11 00:48:33.062: INFO: Got endpoints: latency-svc-r2pfr [827.1498ms] May 11 00:48:33.076: INFO: Created: latency-svc-vqw5j May 11 00:48:33.091: INFO: Got endpoints: latency-svc-vqw5j [782.99185ms] May 11 00:48:33.113: INFO: Created: latency-svc-nrwm2 May 11 00:48:33.128: INFO: Got endpoints: latency-svc-nrwm2 [778.921747ms] May 11 00:48:33.148: INFO: Created: latency-svc-cqlfr May 11 00:48:33.212: INFO: Got endpoints: latency-svc-cqlfr [813.904453ms] May 11 00:48:33.231: INFO: Created: latency-svc-v4gkb May 11 00:48:33.243: INFO: Got endpoints: latency-svc-v4gkb [793.123146ms] May 11 00:48:33.267: INFO: Created: latency-svc-gtkjw May 11 00:48:33.279: INFO: Got endpoints: latency-svc-gtkjw [772.880561ms] May 11 00:48:33.311: INFO: Created: latency-svc-7n5bb May 11 00:48:33.368: INFO: Got endpoints: latency-svc-7n5bb [753.39765ms] May 11 00:48:33.413: INFO: Created: latency-svc-c2fg2 May 11 00:48:33.441: INFO: Got endpoints: latency-svc-c2fg2 [802.457812ms] May 11 00:48:33.518: INFO: Created: latency-svc-np9fj May 11 00:48:33.545: INFO: Got endpoints: latency-svc-np9fj [875.22364ms] May 11 00:48:33.545: INFO: Created: latency-svc-sqsks May 11 00:48:33.581: INFO: Got endpoints: latency-svc-sqsks [881.336881ms] May 11 00:48:33.617: INFO: Created: latency-svc-kk44p May 11 00:48:33.655: INFO: Got endpoints: latency-svc-kk44p [895.291109ms] May 11 00:48:33.669: INFO: 
Created: latency-svc-h258x May 11 00:48:33.683: INFO: Got endpoints: latency-svc-h258x [886.569708ms] May 11 00:48:33.705: INFO: Created: latency-svc-rfx4j May 11 00:48:33.719: INFO: Got endpoints: latency-svc-rfx4j [867.839786ms] May 11 00:48:33.741: INFO: Created: latency-svc-2l9kv May 11 00:48:33.835: INFO: Got endpoints: latency-svc-2l9kv [882.031679ms] May 11 00:48:33.844: INFO: Created: latency-svc-rpb55 May 11 00:48:33.858: INFO: Got endpoints: latency-svc-rpb55 [850.282467ms] May 11 00:48:33.903: INFO: Created: latency-svc-9bx7b May 11 00:48:33.912: INFO: Got endpoints: latency-svc-9bx7b [850.115552ms] May 11 00:48:33.933: INFO: Created: latency-svc-q4kkr May 11 00:48:34.020: INFO: Got endpoints: latency-svc-q4kkr [928.941426ms] May 11 00:48:34.023: INFO: Created: latency-svc-c26gh May 11 00:48:34.053: INFO: Got endpoints: latency-svc-c26gh [924.251611ms] May 11 00:48:34.091: INFO: Created: latency-svc-627fx May 11 00:48:34.105: INFO: Got endpoints: latency-svc-627fx [892.141679ms] May 11 00:48:34.170: INFO: Created: latency-svc-xc226 May 11 00:48:34.211: INFO: Got endpoints: latency-svc-xc226 [967.778446ms] May 11 00:48:34.257: INFO: Created: latency-svc-ps7ng May 11 00:48:34.308: INFO: Got endpoints: latency-svc-ps7ng [1.02858433s] May 11 00:48:34.316: INFO: Created: latency-svc-z2wbk May 11 00:48:34.348: INFO: Got endpoints: latency-svc-z2wbk [980.300943ms] May 11 00:48:34.379: INFO: Created: latency-svc-8xq4p May 11 00:48:34.394: INFO: Got endpoints: latency-svc-8xq4p [952.601493ms] May 11 00:48:34.458: INFO: Created: latency-svc-vdcbw May 11 00:48:34.461: INFO: Got endpoints: latency-svc-vdcbw [915.579333ms] May 11 00:48:34.497: INFO: Created: latency-svc-6nqff May 11 00:48:34.510: INFO: Got endpoints: latency-svc-6nqff [928.577235ms] May 11 00:48:34.533: INFO: Created: latency-svc-7gvl9 May 11 00:48:34.601: INFO: Got endpoints: latency-svc-7gvl9 [946.366549ms] May 11 00:48:34.618: INFO: Created: latency-svc-4tpdx May 11 00:48:34.636: INFO: Got endpoints: latency-svc-4tpdx [952.749938ms] May 11 00:48:34.661: INFO: Created: latency-svc-2xjq8 May 11 00:48:34.678: INFO: Got endpoints: latency-svc-2xjq8 [959.34957ms] May 11 00:48:34.696: INFO: Created: latency-svc-p6rjk May 11 00:48:34.751: INFO: Got endpoints: latency-svc-p6rjk [916.191092ms] May 11 00:48:34.773: INFO: Created: latency-svc-2dnqr May 11 00:48:34.787: INFO: Got endpoints: latency-svc-2dnqr [929.086408ms] May 11 00:48:34.804: INFO: Created: latency-svc-m4v84 May 11 00:48:34.816: INFO: Got endpoints: latency-svc-m4v84 [904.151964ms] May 11 00:48:34.849: INFO: Created: latency-svc-4rph6 May 11 00:48:34.925: INFO: Got endpoints: latency-svc-4rph6 [904.695387ms] May 11 00:48:34.948: INFO: Created: latency-svc-ql4vq May 11 00:48:34.973: INFO: Got endpoints: latency-svc-ql4vq [920.678362ms] May 11 00:48:35.009: INFO: Created: latency-svc-xwfpj May 11 00:48:35.022: INFO: Got endpoints: latency-svc-xwfpj [917.156723ms] May 11 00:48:35.087: INFO: Created: latency-svc-dz42l May 11 00:48:35.094: INFO: Got endpoints: latency-svc-dz42l [883.12787ms] May 11 00:48:35.114: INFO: Created: latency-svc-hktqz May 11 00:48:35.144: INFO: Got endpoints: latency-svc-hktqz [836.394549ms] May 11 00:48:35.168: INFO: Created: latency-svc-pzfjx May 11 00:48:35.182: INFO: Got endpoints: latency-svc-pzfjx [833.487877ms] May 11 00:48:35.231: INFO: Created: latency-svc-gt57s May 11 00:48:35.236: INFO: Got endpoints: latency-svc-gt57s [841.831595ms] May 11 00:48:35.261: INFO: Created: latency-svc-lx6v6 May 11 00:48:35.273: INFO: Got endpoints: 
latency-svc-lx6v6 [811.765821ms] May 11 00:48:35.296: INFO: Created: latency-svc-jvjpw May 11 00:48:35.398: INFO: Got endpoints: latency-svc-jvjpw [888.581045ms] May 11 00:48:35.400: INFO: Created: latency-svc-7bg52 May 11 00:48:35.404: INFO: Got endpoints: latency-svc-7bg52 [803.048946ms] May 11 00:48:35.432: INFO: Created: latency-svc-mz9xq May 11 00:48:35.447: INFO: Got endpoints: latency-svc-mz9xq [811.462589ms] May 11 00:48:35.464: INFO: Created: latency-svc-xhkjp May 11 00:48:35.477: INFO: Got endpoints: latency-svc-xhkjp [798.685634ms] May 11 00:48:35.547: INFO: Created: latency-svc-dv9tn May 11 00:48:35.558: INFO: Got endpoints: latency-svc-dv9tn [806.636562ms] May 11 00:48:35.589: INFO: Created: latency-svc-j4b8v May 11 00:48:35.605: INFO: Got endpoints: latency-svc-j4b8v [818.468641ms] May 11 00:48:35.692: INFO: Created: latency-svc-hn6lk May 11 00:48:35.694: INFO: Got endpoints: latency-svc-hn6lk [877.42629ms] May 11 00:48:35.722: INFO: Created: latency-svc-zt9wh May 11 00:48:35.736: INFO: Got endpoints: latency-svc-zt9wh [811.251001ms] May 11 00:48:35.760: INFO: Created: latency-svc-r269w May 11 00:48:35.767: INFO: Got endpoints: latency-svc-r269w [793.277971ms] May 11 00:48:35.786: INFO: Created: latency-svc-8w24d May 11 00:48:35.835: INFO: Got endpoints: latency-svc-8w24d [813.022448ms] May 11 00:48:35.877: INFO: Created: latency-svc-xk4tp May 11 00:48:35.915: INFO: Got endpoints: latency-svc-xk4tp [820.821636ms] May 11 00:48:36.021: INFO: Created: latency-svc-qnp5r May 11 00:48:36.111: INFO: Got endpoints: latency-svc-qnp5r [966.330823ms] May 11 00:48:36.111: INFO: Created: latency-svc-6qsmc May 11 00:48:36.188: INFO: Got endpoints: latency-svc-6qsmc [1.00636114s] May 11 00:48:36.226: INFO: Created: latency-svc-pjfck May 11 00:48:36.272: INFO: Got endpoints: latency-svc-pjfck [1.036243421s] May 11 00:48:36.379: INFO: Created: latency-svc-7wrhj May 11 00:48:36.393: INFO: Got endpoints: latency-svc-7wrhj [1.119975884s] May 11 00:48:36.411: INFO: Created: latency-svc-cfzfr May 11 00:48:36.423: INFO: Got endpoints: latency-svc-cfzfr [1.024434499s] May 11 00:48:36.441: INFO: Created: latency-svc-cnln6 May 11 00:48:36.541: INFO: Got endpoints: latency-svc-cnln6 [1.136799564s] May 11 00:48:36.544: INFO: Created: latency-svc-dxw7z May 11 00:48:36.555: INFO: Got endpoints: latency-svc-dxw7z [1.108031363s] May 11 00:48:36.597: INFO: Created: latency-svc-f2xqb May 11 00:48:36.622: INFO: Got endpoints: latency-svc-f2xqb [1.144573988s] May 11 00:48:36.667: INFO: Created: latency-svc-7rkd2 May 11 00:48:36.695: INFO: Got endpoints: latency-svc-7rkd2 [1.137277132s] May 11 00:48:36.697: INFO: Created: latency-svc-gv2nt May 11 00:48:36.729: INFO: Got endpoints: latency-svc-gv2nt [1.124177856s] May 11 00:48:36.829: INFO: Created: latency-svc-q2b8n May 11 00:48:36.845: INFO: Got endpoints: latency-svc-q2b8n [1.151232153s] May 11 00:48:36.880: INFO: Created: latency-svc-kfsl8 May 11 00:48:36.916: INFO: Got endpoints: latency-svc-kfsl8 [1.179727102s] May 11 00:48:36.991: INFO: Created: latency-svc-982ww May 11 00:48:37.007: INFO: Got endpoints: latency-svc-982ww [1.240383002s] May 11 00:48:37.031: INFO: Created: latency-svc-lb5fh May 11 00:48:37.043: INFO: Got endpoints: latency-svc-lb5fh [1.20806242s] May 11 00:48:37.077: INFO: Created: latency-svc-vm5b7 May 11 00:48:37.128: INFO: Got endpoints: latency-svc-vm5b7 [1.213005893s] May 11 00:48:37.145: INFO: Created: latency-svc-sspg9 May 11 00:48:37.157: INFO: Got endpoints: latency-svc-sspg9 [1.046519932s] May 11 00:48:37.182: INFO: Created: 
latency-svc-2x2qk May 11 00:48:37.194: INFO: Got endpoints: latency-svc-2x2qk [1.005909452s] May 11 00:48:37.218: INFO: Created: latency-svc-nrlt6 May 11 00:48:37.260: INFO: Got endpoints: latency-svc-nrlt6 [987.950141ms] May 11 00:48:37.275: INFO: Created: latency-svc-kjn64 May 11 00:48:37.291: INFO: Got endpoints: latency-svc-kjn64 [897.703835ms] May 11 00:48:37.311: INFO: Created: latency-svc-4cts7 May 11 00:48:37.342: INFO: Got endpoints: latency-svc-4cts7 [918.585904ms] May 11 00:48:37.410: INFO: Created: latency-svc-7578r May 11 00:48:37.423: INFO: Got endpoints: latency-svc-7578r [881.882028ms] May 11 00:48:37.451: INFO: Created: latency-svc-qxsdt May 11 00:48:37.459: INFO: Got endpoints: latency-svc-qxsdt [904.314613ms] May 11 00:48:37.479: INFO: Created: latency-svc-gqdm6 May 11 00:48:37.496: INFO: Got endpoints: latency-svc-gqdm6 [874.034467ms] May 11 00:48:37.579: INFO: Created: latency-svc-jzd8t May 11 00:48:37.586: INFO: Got endpoints: latency-svc-jzd8t [890.820304ms] May 11 00:48:37.607: INFO: Created: latency-svc-spjbf May 11 00:48:37.631: INFO: Got endpoints: latency-svc-spjbf [901.413637ms] May 11 00:48:37.656: INFO: Created: latency-svc-xsx6z May 11 00:48:37.670: INFO: Got endpoints: latency-svc-xsx6z [825.083991ms] May 11 00:48:37.670: INFO: Latencies: [76.901273ms 179.042241ms 220.838504ms 313.66637ms 321.007573ms 358.219248ms 402.605783ms 472.564233ms 509.171262ms 539.200624ms 642.972997ms 665.738773ms 750.343851ms 753.39765ms 772.880561ms 774.090254ms 778.921747ms 782.99185ms 793.123146ms 793.277971ms 794.031699ms 798.685634ms 802.457812ms 802.910064ms 803.048946ms 806.636562ms 811.251001ms 811.344857ms 811.462589ms 811.765821ms 813.022448ms 813.904453ms 818.468641ms 820.821636ms 825.083991ms 827.1498ms 833.487877ms 835.796509ms 836.394549ms 838.654951ms 840.990535ms 841.831595ms 842.659714ms 844.4532ms 847.097194ms 850.020976ms 850.115552ms 850.282467ms 852.430837ms 856.347399ms 862.341567ms 862.889852ms 867.839786ms 870.293311ms 874.034467ms 875.22364ms 876.395602ms 877.42629ms 879.969899ms 881.336881ms 881.882028ms 882.031679ms 882.794578ms 883.12787ms 886.569708ms 888.581045ms 890.820304ms 892.141679ms 895.291109ms 897.054771ms 897.703835ms 900.5563ms 901.413637ms 902.469129ms 904.151964ms 904.314613ms 904.695387ms 906.720655ms 909.98711ms 910.354366ms 913.569332ms 915.579333ms 916.191092ms 917.156723ms 918.484855ms 918.585904ms 920.678362ms 923.315456ms 924.251611ms 926.3912ms 928.243272ms 928.577235ms 928.941426ms 929.086408ms 930.352745ms 931.249633ms 933.629143ms 934.001439ms 935.552358ms 936.667239ms 937.482165ms 938.806351ms 941.576947ms 941.809918ms 942.89994ms 944.493026ms 946.135656ms 946.366549ms 952.601493ms 952.749938ms 952.837832ms 953.174837ms 954.857097ms 957.577295ms 957.720153ms 959.34957ms 962.00418ms 962.56994ms 966.330823ms 967.778446ms 969.471111ms 980.300943ms 981.382889ms 983.422167ms 987.950141ms 989.40271ms 994.259777ms 999.864601ms 1.000164107s 1.000812839s 1.002016304s 1.004219114s 1.004764412s 1.005909452s 1.00636114s 1.009048863s 1.011455247s 1.012243111s 1.018480055s 1.021384943s 1.021632385s 1.023318813s 1.024434499s 1.02858433s 1.029559478s 1.036243421s 1.046519932s 1.046583063s 1.049136191s 1.053458989s 1.060119666s 1.084086756s 1.084333751s 1.090090988s 1.102402024s 1.108031363s 1.112682179s 1.113013904s 1.115966055s 1.119975884s 1.124177856s 1.125482996s 1.125663524s 1.12616247s 1.128381181s 1.133063895s 1.136799564s 1.137277132s 1.140670699s 1.144573988s 1.147086276s 1.147224941s 1.148085825s 1.151232153s 1.1517781s 1.157605414s 
1.15820686s 1.167516042s 1.169964926s 1.173739805s 1.176426278s 1.179727102s 1.191717584s 1.201788296s 1.20806242s 1.213005893s 1.215661802s 1.216059317s 1.224064402s 1.227323102s 1.237471535s 1.238768886s 1.240239327s 1.240383002s 1.244478596s 1.282052931s 1.282470214s 1.2868847s 1.287536958s 1.287914406s] May 11 00:48:37.671: INFO: 50 %ile: 937.482165ms May 11 00:48:37.671: INFO: 90 %ile: 1.176426278s May 11 00:48:37.671: INFO: 99 %ile: 1.287536958s May 11 00:48:37.671: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:48:37.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3658" for this suite. • [SLOW TEST:17.444 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":181,"skipped":3129,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:48:37.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:48:37.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304" in namespace "projected-5968" to be "Succeeded or Failed" May 11 00:48:38.001: INFO: Pod "downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304": Phase="Pending", Reason="", readiness=false. Elapsed: 32.502901ms May 11 00:48:40.014: INFO: Pod "downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045998099s May 11 00:48:42.019: INFO: Pod "downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050282511s STEP: Saw pod success May 11 00:48:42.019: INFO: Pod "downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304" satisfied condition "Succeeded or Failed" May 11 00:48:42.022: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304 container client-container: STEP: delete the pod May 11 00:48:42.166: INFO: Waiting for pod downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304 to disappear May 11 00:48:42.188: INFO: Pod downwardapi-volume-76edc72a-5edb-4b8d-8e23-481cfc56a304 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:48:42.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5968" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:48:42.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 00:48:42.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224159 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:48:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:48:42.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224159 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:48:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 00:48:52.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224557 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2020-05-11 00:48:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:48:52.484: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224557 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:48:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 00:49:02.571: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224967 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:49:02.572: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3224967 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 00:49:12.578: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3225080 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:49:12.578: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-a 383221e9-7bdf-4a8b-8153-1582c430184a 3225080 0 2020-05-11 00:48:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 00:49:22.586: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-b f20d11bf-4cbd-40ba-987d-01f139814f8d 3225108 0 2020-05-11 00:49:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[{e2e.test Update v1 2020-05-11 00:49:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:49:22.586: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-b f20d11bf-4cbd-40ba-987d-01f139814f8d 3225108 0 2020-05-11 00:49:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 00:49:32.592: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-b f20d11bf-4cbd-40ba-987d-01f139814f8d 3225138 0 2020-05-11 00:49:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 00:49:32.592: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-856 /api/v1/namespaces/watch-856/configmaps/e2e-watch-test-configmap-b f20d11bf-4cbd-40ba-987d-01f139814f8d 3225138 0 2020-05-11 00:49:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 00:49:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:49:42.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-856" for this suite. 
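The watch test above opens three watchers (label A, label B, and A-or-B), which is why every event on configmap A appears twice in the log: once for the A watcher and once for the A-or-B watcher. A minimal client-go sketch of one such watcher, using the namespace and label selector from the log above and the run's kubeconfig path (error handling shortened to panics):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig this suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch configmaps carrying label A, as the test does for watcher A.
	w, err := clientset.CoreV1().ConfigMaps("watch-856").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each ADDED/MODIFIED/DELETED event corresponds to one "Got : ..." line above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}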
• [SLOW TEST:60.395 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":183,"skipped":3183,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:49:42.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:49:47.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7259" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":184,"skipped":3189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:49:47.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 00:49:47.526: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 00:49:47.536: INFO: Waiting for terminating namespaces to be deleted... 
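The predicate test's setup, shown next, logs the pods already bound to each node so their CPU requests can be accounted for. A sketch of that per-node query, assuming the clientset and imports from the previous sketch; the field selector "spec.nodeName=<node>" is the stock way to scope a pod list to a single node:

// listPodsOnNode prints the pods bound to one node, mirroring the
// "Logging pods the apiserver thinks are on node ..." lines below.
func listPodsOnNode(clientset *kubernetes.Clientset, node string) error {
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s from %s started at %s\n", p.Name, p.Namespace, p.CreationTimestamp)
	}
	return nil
}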
May 11 00:49:47.538: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 11 00:49:47.541: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 00:49:47.541: INFO: Container kindnet-cni ready: true, restart count 0 May 11 00:49:47.541: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 00:49:47.541: INFO: Container kube-proxy ready: true, restart count 0 May 11 00:49:47.541: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 11 00:49:47.544: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 00:49:47.544: INFO: Container kindnet-cni ready: true, restart count 0 May 11 00:49:47.544: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 00:49:47.544: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 11 00:49:47.671: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 11 00:49:47.671: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 11 00:49:47.671: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 11 00:49:47.671: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 11 00:49:47.671: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 11 00:49:47.676: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
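The filler pods above are sized so that each node's remaining allocatable CPU is consumed almost entirely, which makes the follow-up pod unschedulable. A self-contained sketch of that arithmetic with resource.Quantity; the 11230m allocatable figure is an assumption inferred from the 100m kindnet request plus the 11130m filler request logged above:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Assumed allocatable: with kindnet requesting 100m and a filler pod of
	// 11130m, the node would report roughly 11230m allocatable CPU.
	allocatable := resource.MustParse("11230m")
	requested := resource.MustParse("100m") // kindnet-hg2tf, per the log

	// Filler request = allocatable minus what is already requested.
	filler := allocatable.DeepCopy()
	filler.Sub(requested)
	fmt.Println("filler pod request:", filler.String()) // prints 11130m
}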
STEP: Considering event: Type = [Normal], Name = [filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c.160dd2f3b604a921], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3401/filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c.160dd2f403994bfd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c.160dd2f4641ab0b7], Reason = [Created], Message = [Created container filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c] STEP: Considering event: Type = [Normal], Name = [filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c.160dd2f472c8ede0], Reason = [Started], Message = [Started container filler-pod-124eb71e-1b83-47a3-af86-658e4f46b80c] STEP: Considering event: Type = [Normal], Name = [filler-pod-752930e1-0371-442a-aab4-b17c3fb33434.160dd2f3b8293568], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3401/filler-pod-752930e1-0371-442a-aab4-b17c3fb33434 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-752930e1-0371-442a-aab4-b17c3fb33434.160dd2f44647cc8c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-752930e1-0371-442a-aab4-b17c3fb33434.160dd2f47c17c001], Reason = [Created], Message = [Created container filler-pod-752930e1-0371-442a-aab4-b17c3fb33434] STEP: Considering event: Type = [Normal], Name = [filler-pod-752930e1-0371-442a-aab4-b17c3fb33434.160dd2f48b68f49f], Reason = [Started], Message = [Started container filler-pod-752930e1-0371-442a-aab4-b17c3fb33434] STEP: Considering event: Type = [Warning], Name = [additional-pod.160dd2f51f7bb207], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160dd2f5213220ff], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:49:54.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3401" for this suite. 
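The FailedScheduling events above report "0/3 nodes are available" because the control-plane node is excluded by its node-role.kubernetes.io/master taint and both workers are now out of CPU. For reference, a sketch of the toleration a pod would need even to be considered for the tainted node; the conformance test intentionally does not set one (corev1 is k8s.io/api/core/v1):

// tolerateMaster appends the toleration that would let a pod schedule onto
// the tainted control-plane node named in the FailedScheduling events.
func tolerateMaster(pod *corev1.Pod) {
	pod.Spec.Tolerations = append(pod.Spec.Tolerations, corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	})
}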
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.455 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":185,"skipped":3220,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:49:54.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-95072117-c4a6-4ea6-90c4-287a488ff567 in namespace container-probe-9446 May 11 00:49:59.001: INFO: Started pod liveness-95072117-c4a6-4ea6-90c4-287a488ff567 in namespace container-probe-9446 STEP: checking the pod's current state and verifying that restartCount is present May 11 00:49:59.005: INFO: Initial restart count of pod liveness-95072117-c4a6-4ea6-90c4-287a488ff567 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:53:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9446" for this suite. 
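The probe test above creates a pod whose container accepts TCP connections on port 8080, then verifies for roughly four minutes that restartCount stays at 0, i.e. the liveness probe keeps succeeding and never triggers a restart. A sketch of such a probe definition; the timing values are illustrative assumptions, not the suite's exact settings, and the embedded Handler field matches the pre-1.23 client-go API in use for this v1.18 cluster:

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLivenessProbe returns a tcp:8080 liveness probe like the one the
// test above exercises; delay/period/threshold values are illustrative.
func tcpLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
}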
• [SLOW TEST:244.861 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3223,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:53:59.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:54:04.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-83" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:54:04.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3133 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3133;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3133 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3133;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3133.svc A)" 
&& test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3133.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3133.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3133.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3133.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3133.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.63.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.63.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.63.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.63.90_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3133 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3133;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3133 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3133;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3133.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3133.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3133.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3133.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3133.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3133.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3133.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3133.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.63.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.63.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.63.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.63.90_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:54:10.421: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.425: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.428: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.430: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.435: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.439: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.442: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.466: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.468: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.471: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.474: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.477: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:10.506: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:15.513: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.516: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.518: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.520: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.528: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.530: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.550: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.554: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.567: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.576: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.585: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:15.605: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:20.511: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.515: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.519: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.524: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.529: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.532: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.555: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.559: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.562: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.568: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.575: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.578: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:20.597: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:25.511: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.515: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.519: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.524: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.530: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.532: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.554: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.558: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.560: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod 
dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.563: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.566: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.569: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.573: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.575: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:25.594: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:30.511: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.515: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.519: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod 
dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.536: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.559: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.562: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.564: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.567: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.569: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:30.595: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:35.511: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.516: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.520: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.523: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.532: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.534: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.556: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.559: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.562: INFO: Unable to read jessie_udp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133 from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.568: INFO: Unable to read jessie_udp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.571: INFO: Unable to read jessie_tcp@dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3133.svc from pod 
dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc from pod dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057: the server could not find the requested resource (get pods dns-test-dcf80593-f579-426e-b82f-6053609aa057) May 11 00:54:35.606: INFO: Lookups using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3133 wheezy_tcp@dns-test-service.dns-3133 wheezy_udp@dns-test-service.dns-3133.svc wheezy_tcp@dns-test-service.dns-3133.svc wheezy_udp@_http._tcp.dns-test-service.dns-3133.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3133.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3133 jessie_tcp@dns-test-service.dns-3133 jessie_udp@dns-test-service.dns-3133.svc jessie_tcp@dns-test-service.dns-3133.svc jessie_udp@_http._tcp.dns-test-service.dns-3133.svc jessie_tcp@_http._tcp.dns-test-service.dns-3133.svc] May 11 00:54:40.599: INFO: DNS probes using dns-3133/dns-test-dcf80593-f579-426e-b82f-6053609aa057 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:54:41.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3133" for this suite. • [SLOW TEST:37.037 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":188,"skipped":3263,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:54:41.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:54:41.270: INFO: Waiting up to 5m0s for pod "busybox-user-65534-39f2b19c-a7a9-4728-9d45-e76cf9269800" in namespace "security-context-test-5078" to be "Succeeded or Failed" May 11 00:54:41.274: INFO: Pod "busybox-user-65534-39f2b19c-a7a9-4728-9d45-e76cf9269800": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.966478ms May 11 00:54:43.459: INFO: Pod "busybox-user-65534-39f2b19c-a7a9-4728-9d45-e76cf9269800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189136609s May 11 00:54:45.485: INFO: Pod "busybox-user-65534-39f2b19c-a7a9-4728-9d45-e76cf9269800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214931921s May 11 00:54:45.485: INFO: Pod "busybox-user-65534-39f2b19c-a7a9-4728-9d45-e76cf9269800" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:54:45.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5078" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:54:45.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:54:50.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4210" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":190,"skipped":3322,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:54:50.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:55:06.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3177" for this suite. • [SLOW TEST:16.294 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":191,"skipped":3322,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:55:06.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 11 00:55:10.956: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7455 pod-service-account-ab89769e-8479-450d-b77f-2dfac8e911a5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 11 00:55:13.968: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7455 pod-service-account-ab89769e-8479-450d-b77f-2dfac8e911a5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 11 00:55:14.161: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7455 pod-service-account-ab89769e-8479-450d-b77f-2dfac8e911a5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:55:14.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7455" for this suite.
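------------------------------
The three kubectl exec commands in the ServiceAccounts test above read the token, CA bundle, and namespace files that Kubernetes projects into every container under /var/run/secrets/kubernetes.io/serviceaccount. A minimal sketch of the same reads done from inside a pod (a standalone illustration, not the suite's own code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Default mount point of the auto-created service account credentials.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	// The e2e test cats exactly these three files via `kubectl exec`.
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Fprintf(os.Stderr, "read %s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}

An in-pod client would present the token as a Bearer credential and verify the API server against ca.crt; that is essentially what client-go's rest.InClusterConfig assembles from these files.
------------------------------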
• [SLOW TEST:8.023 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":192,"skipped":3325,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:55:14.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 00:55:17.599: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:55:17.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6345" for this suite. 
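------------------------------
The terminated-container test above exercises a custom terminationMessagePath combined with a non-root user. A minimal client-go sketch of such a pod follows; the pod name, namespace, UID, and the /dev/termination-custom path are illustrative, not the suite's generated values:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Write the message to a non-default path, then exit 0.
				Command: []string{"sh", "-c", "echo DONE > /dev/termination-custom"},
				// The kubelet copies this file into the terminated state's message.
				TerminationMessagePath:   "/dev/termination-custom",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				// Run as a non-root UID, as the [LinuxOnly] variant requires.
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Once the pod succeeds, an assertion like the "Expected: &{DONE} to match" line above corresponds to status.containerStatuses[0].state.terminated.message.
------------------------------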
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3332,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:55:17.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:55:23.967: INFO: DNS probes using dns-test-3e322461-b761-4191-92cd-b8739740fb14 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:55:32.115: INFO: File wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:32.119: INFO: File jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:32.119: INFO: Lookups using dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 failed for: [wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local] May 11 00:55:37.124: INFO: File wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:37.128: INFO: File jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 11 00:55:37.128: INFO: Lookups using dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 failed for: [wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local] May 11 00:55:42.124: INFO: File wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:42.128: INFO: File jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:42.128: INFO: Lookups using dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 failed for: [wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local] May 11 00:55:47.124: INFO: File wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:47.129: INFO: File jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:47.129: INFO: Lookups using dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 failed for: [wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local] May 11 00:55:52.126: INFO: File wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:52.131: INFO: File jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local from pod dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 00:55:52.131: INFO: Lookups using dns-7898/dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 failed for: [wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local] May 11 00:55:57.128: INFO: DNS probes using dns-test-07466b75-9575-4a6d-9f39-2e8a65e7c999 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7898.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7898.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 00:56:03.812: INFO: DNS probes using dns-test-6752c51b-e245-4f09-aba7-3593784bfb9f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:03.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7898" for this suite. 
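------------------------------
The ExternalName test above creates a Service that publishes a CNAME rather than a ClusterIP, flips the CNAME target, and finally converts the Service to ClusterIP; the retry loops in the log are the probe pods waiting for the stale foo.example.com answers to age out. A minimal client-go sketch of the create-then-flip portion (namespace and names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Type=ExternalName makes cluster DNS answer with a CNAME to the target.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Changing spec.externalName is what moves the CNAME to bar.example.com;
	// the dig probes keep polling until the new answer appears.
	created.Spec.ExternalName = "bar.example.com"
	if _, err := cs.CoreV1().Services("default").Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------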
• [SLOW TEST:46.360 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":194,"skipped":3339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:04.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 00:56:04.465: INFO: Waiting up to 5m0s for pod "downward-api-56a96016-4e67-4303-8f74-363088503a9c" in namespace "downward-api-9733" to be "Succeeded or Failed" May 11 00:56:04.468: INFO: Pod "downward-api-56a96016-4e67-4303-8f74-363088503a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090845ms May 11 00:56:06.472: INFO: Pod "downward-api-56a96016-4e67-4303-8f74-363088503a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006940647s May 11 00:56:08.522: INFO: Pod "downward-api-56a96016-4e67-4303-8f74-363088503a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056614115s STEP: Saw pod success May 11 00:56:08.522: INFO: Pod "downward-api-56a96016-4e67-4303-8f74-363088503a9c" satisfied condition "Succeeded or Failed" May 11 00:56:08.524: INFO: Trying to get logs from node latest-worker pod downward-api-56a96016-4e67-4303-8f74-363088503a9c container dapi-container: STEP: delete the pod May 11 00:56:08.673: INFO: Waiting for pod downward-api-56a96016-4e67-4303-8f74-363088503a9c to disappear May 11 00:56:08.684: INFO: Pod downward-api-56a96016-4e67-4303-8f74-363088503a9c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:08.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9733" for this suite. 
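------------------------------
The Downward API env-var test above injects pod metadata through fieldRef selectors. A minimal client-go sketch of the kind of pod it creates (pod and env names are illustrative; the fieldPath values are the ones the test asserts on):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // print the injected values
				Env: []corev1.EnvVar{
					{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
					{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------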
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3377,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:08.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 00:56:08.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d" in namespace "downward-api-6448" to be "Succeeded or Failed" May 11 00:56:08.842: INFO: Pod "downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.0479ms May 11 00:56:10.995: INFO: Pod "downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156082092s May 11 00:56:13.006: INFO: Pod "downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166930634s STEP: Saw pod success May 11 00:56:13.006: INFO: Pod "downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d" satisfied condition "Succeeded or Failed" May 11 00:56:13.009: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d container client-container: STEP: delete the pod May 11 00:56:13.097: INFO: Waiting for pod downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d to disappear May 11 00:56:13.144: INFO: Pod downwardapi-volume-286ece6e-ac35-4c13-89e6-5299f22bb81d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:13.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6448" for this suite. 
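------------------------------
The Downward API volume test above checks the per-file mode bits on a projected item. A sketch of the relevant volume definition (pod, path, and the 0400 mode are illustrative stand-ins for the test's values):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mode := int32(0400) // per-item mode; "set mode on item file" asserts on this
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // overrides the volume-wide default mode
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------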
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3379,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:13.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:56:14.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:56:16.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755374, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755374, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755374, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755374, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:56:19.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 11 00:56:19.456: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:19.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3656" for this suite. STEP: Destroying namespace "webhook-3656-markers" for this suite. 
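------------------------------
The webhook test above registers an admission webhook that intercepts CustomResourceDefinition objects, which is why the subsequent CRD create is rejected. A rough client-go sketch of such a registration; the configuration name, handler path, and CA bundle here are placeholders, not the suite's generated values:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func strPtr(s string) *string { return &s }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	caBundle := []byte("-----BEGIN CERTIFICATE-----...") // placeholder PEM for the webhook server's CA
	hook := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd-creation.example.com",
			// Intercept CREATE of CustomResourceDefinitions.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3656", // the e2e-test-webhook service deployed above
					Name:      "e2e-test-webhook",
					Path:      strPtr("/crd"), // illustrative handler path
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(context.TODO(), hook, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------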
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.378 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":197,"skipped":3388,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:19.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 00:56:24.242: INFO: Successfully updated pod "pod-update-5cc41c53-3bd7-41c2-ba90-d85a360a5265" STEP: verifying the updated pod is in kubernetes May 11 00:56:24.271: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:24.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9873" for this suite. 
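------------------------------
The Pods update test above mutates a live pod and verifies the change on read-back. Only a few pod fields are mutable after creation; labels are among them, and that is what a sketch like this exercises (pod name and namespace are illustrative). Wrapping the read-modify-write in RetryOnConflict is a common client-go pattern, added here by way of illustration rather than taken from the test:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries a fresh resourceVersion.
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, err = cs.CoreV1().Pods("default").Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
------------------------------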
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3396,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:24.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 11 00:56:32.382: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2988 PodName:pod-sharedvolume-4748d94a-b84b-4069-af35-d8b04cc4ef40 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 00:56:32.382: INFO: >>> kubeConfig: /root/.kube/config I0511 00:56:32.418039 7 log.go:172] (0xc00234aa50) (0xc002135540) Create stream I0511 00:56:32.418066 7 log.go:172] (0xc00234aa50) (0xc002135540) Stream added, broadcasting: 1 I0511 00:56:32.419938 7 log.go:172] (0xc00234aa50) Reply frame received for 1 I0511 00:56:32.419972 7 log.go:172] (0xc00234aa50) (0xc0025e3b80) Create stream I0511 00:56:32.419982 7 log.go:172] (0xc00234aa50) (0xc0025e3b80) Stream added, broadcasting: 3 I0511 00:56:32.420845 7 log.go:172] (0xc00234aa50) Reply frame received for 3 I0511 00:56:32.420874 7 log.go:172] (0xc00234aa50) (0xc002135680) Create stream I0511 00:56:32.420886 7 log.go:172] (0xc00234aa50) (0xc002135680) Stream added, broadcasting: 5 I0511 00:56:32.421877 7 log.go:172] (0xc00234aa50) Reply frame received for 5 I0511 00:56:32.494363 7 log.go:172] (0xc00234aa50) Data frame received for 5 I0511 00:56:32.494413 7 log.go:172] (0xc002135680) (5) Data frame handling I0511 00:56:32.494437 7 log.go:172] (0xc00234aa50) Data frame received for 3 I0511 00:56:32.494450 7 log.go:172] (0xc0025e3b80) (3) Data frame handling I0511 00:56:32.494461 7 log.go:172] (0xc0025e3b80) (3) Data frame sent I0511 00:56:32.494473 7 log.go:172] (0xc00234aa50) Data frame received for 3 I0511 00:56:32.494492 7 log.go:172] (0xc0025e3b80) (3) Data frame handling I0511 00:56:32.496362 7 log.go:172] (0xc00234aa50) Data frame received for 1 I0511 00:56:32.496414 7 log.go:172] (0xc002135540) (1) Data frame handling I0511 00:56:32.496488 7 log.go:172] (0xc002135540) (1) Data frame sent I0511 00:56:32.496518 7 log.go:172] (0xc00234aa50) (0xc002135540) Stream removed, broadcasting: 1 I0511 00:56:32.496550 7 log.go:172] (0xc00234aa50) Go away received I0511 00:56:32.496703 7 log.go:172] (0xc00234aa50) (0xc002135540) Stream removed, broadcasting: 1 I0511 00:56:32.496730 7 log.go:172] (0xc00234aa50) (0xc0025e3b80) Stream removed, broadcasting: 3 I0511 00:56:32.496741 7 log.go:172] (0xc00234aa50) (0xc002135680) Stream removed, broadcasting: 5 May 11 00:56:32.496: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May
11 00:56:32.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2988" for this suite. • [SLOW TEST:8.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":199,"skipped":3397,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:32.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 11 00:56:32.595: INFO: Waiting up to 5m0s for pod "var-expansion-197ff545-3833-4910-9860-f3b43a6b1455" in namespace "var-expansion-700" to be "Succeeded or Failed" May 11 00:56:32.619: INFO: Pod "var-expansion-197ff545-3833-4910-9860-f3b43a6b1455": Phase="Pending", Reason="", readiness=false. Elapsed: 23.717384ms May 11 00:56:34.647: INFO: Pod "var-expansion-197ff545-3833-4910-9860-f3b43a6b1455": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052497861s May 11 00:56:36.713: INFO: Pod "var-expansion-197ff545-3833-4910-9860-f3b43a6b1455": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11831808s STEP: Saw pod success May 11 00:56:36.713: INFO: Pod "var-expansion-197ff545-3833-4910-9860-f3b43a6b1455" satisfied condition "Succeeded or Failed" May 11 00:56:36.717: INFO: Trying to get logs from node latest-worker pod var-expansion-197ff545-3833-4910-9860-f3b43a6b1455 container dapi-container: STEP: delete the pod May 11 00:56:36.794: INFO: Waiting for pod var-expansion-197ff545-3833-4910-9860-f3b43a6b1455 to disappear May 11 00:56:36.844: INFO: Pod var-expansion-197ff545-3833-4910-9860-f3b43a6b1455 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:36.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-700" for this suite. 
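------------------------------
The Variable Expansion test above relies on the kubelet substituting $(VAR) references in a container's args from that container's own environment; no shell is involved in the expansion. A minimal sketch of such a pod (names and values are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				// $(TEST_VAR) is expanded by the kubelet before the process
				// starts, so the container simply echoes "test-value".
				Command: []string{"/bin/echo"},
				Args:    []string{"$(TEST_VAR)"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------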
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3397,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:36.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:36.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5527" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":201,"skipped":3402,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:36.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 00:56:37.046: INFO: Waiting up to 5m0s for pod "downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e" in namespace "downward-api-3564" to be "Succeeded or Failed" May 11 00:56:37.058: INFO: Pod "downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.149862ms May 11 00:56:39.204: INFO: Pod "downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158120187s May 11 00:56:41.208: INFO: Pod "downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.162018413s STEP: Saw pod success May 11 00:56:41.208: INFO: Pod "downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e" satisfied condition "Succeeded or Failed" May 11 00:56:41.211: INFO: Trying to get logs from node latest-worker pod downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e container dapi-container: STEP: delete the pod May 11 00:56:41.247: INFO: Waiting for pod downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e to disappear May 11 00:56:41.261: INFO: Pod downward-api-ff1d5d7f-10d3-4b89-9fb5-47cb10d90f6e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:41.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3564" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3404,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:41.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 00:56:42.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 00:56:44.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755402, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755402, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755402, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755402, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 00:56:47.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:56:47.169: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:56:48.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7752" for this suite. STEP: Destroying namespace "webhook-7752-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.290 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":203,"skipped":3410,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:56:48.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 11 00:58:49.224: INFO: Successfully updated pod "var-expansion-0ee450a4-58c4-4881-9f10-468686520ab6" STEP: waiting for pod running STEP: deleting the pod gracefully May 11 00:58:51.234: INFO: Deleting pod "var-expansion-0ee450a4-58c4-4881-9f10-468686520ab6" in namespace "var-expansion-5240" May 11 00:58:51.240: INFO: Wait up to 5m0s for pod "var-expansion-0ee450a4-58c4-4881-9f10-468686520ab6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:59:25.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5240" for this suite. 
• [SLOW TEST:156.716 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":204,"skipped":3420,"failed":0} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:59:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-bfe9b071-bf31-4635-b8c7-bbd1fee681bc STEP: Creating a pod to test consume configMaps May 11 00:59:25.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31" in namespace "configmap-8522" to be "Succeeded or Failed" May 11 00:59:25.454: INFO: Pod "pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31": Phase="Pending", Reason="", readiness=false. Elapsed: 23.882676ms May 11 00:59:27.480: INFO: Pod "pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050620422s May 11 00:59:29.485: INFO: Pod "pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054917897s STEP: Saw pod success May 11 00:59:29.485: INFO: Pod "pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31" satisfied condition "Succeeded or Failed" May 11 00:59:29.488: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31 container configmap-volume-test: STEP: delete the pod May 11 00:59:29.578: INFO: Waiting for pod pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31 to disappear May 11 00:59:29.589: INFO: Pod pod-configmaps-96cc4431-10e3-496a-8b9d-de8bb81abc31 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:59:29.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8522" for this suite. 
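------------------------------
The ConfigMap test above mounts one ConfigMap through two separate volumes in the same pod and expects no conflict. A sketch of that shape; the ConfigMap name, its key, and the mount paths are illustrative and assume a ConfigMap with a "data" key already exists:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cmRef := corev1.LocalObjectReference{Name: "configmap-test-volume"} // assumed to exist
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-one/data /etc/cm-two/data"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-one", MountPath: "/etc/cm-one"},
					{Name: "cm-two", MountPath: "/etc/cm-two"},
				},
			}},
			// Both volumes point at the same ConfigMap.
			Volumes: []corev1.Volume{
				{Name: "cm-one", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
				{Name: "cm-two", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------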
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:59:29.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 00:59:29.653: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 00:59:32.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5129 create -f -' May 11 00:59:35.954: INFO: stderr: "" May 11 00:59:35.954: INFO: stdout: "e2e-test-crd-publish-openapi-9373-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 00:59:35.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5129 delete e2e-test-crd-publish-openapi-9373-crds test-cr' May 11 00:59:36.073: INFO: stderr: "" May 11 00:59:36.073: INFO: stdout: "e2e-test-crd-publish-openapi-9373-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 11 00:59:36.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5129 apply -f -' May 11 00:59:36.333: INFO: stderr: "" May 11 00:59:36.333: INFO: stdout: "e2e-test-crd-publish-openapi-9373-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 00:59:36.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5129 delete e2e-test-crd-publish-openapi-9373-crds test-cr' May 11 00:59:36.432: INFO: stderr: "" May 11 00:59:36.432: INFO: stdout: "e2e-test-crd-publish-openapi-9373-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 00:59:36.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9373-crds' May 11 00:59:36.701: INFO: stderr: "" May 11 00:59:36.701: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9373-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:59:39.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5129" for this suite. 
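------------------------------
The CRD publish-OpenAPI test above depends on x-kubernetes-preserve-unknown-fields at the schema root, which is why client-side validation accepts arbitrary properties and kubectl explain prints an empty DESCRIPTION. A sketch of such a CRD built with the apiextensions client; the group and names are illustrative, not the generated e2e-test-crd-publish-openapi-9373 ones:

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	preserve := true
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "tests.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "tests", Singular: "test", Kind: "Test", ListKind: "TestList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// An object schema with no properties that preserves unknown
					// fields: any payload passes client- and server-side validation.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------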
• [SLOW TEST:10.039 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":206,"skipped":3446,"failed":0} SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:59:39.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 00:59:39.811: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 00:59:54.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9893" for this suite. 
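------------------------------
The submit-and-remove test registers a watch before creating the pod, so that both creation and graceful deletion show up as events ("verifying pod creation was observed" / "verifying pod deletion was observed"). A minimal sketch of that pattern with client-go; the pod name and namespace are illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "watched-pod" // illustrative

	// Set up the watch first, scoped to this one pod, so no event is missed.
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "main", Image: "busybox", Command: []string{"sleep", "3600"},
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete gracefully and wait for the watch to report the Deleted event.
	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}
```
------------------------------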
• [SLOW TEST:15.221 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 00:59:54.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bd4c439e-6de9-4458-b519-dc5746c18aea STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bd4c439e-6de9-4458-b519-dc5746c18aea STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:00:01.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9791" for this suite. 
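------------------------------
The projected-configMap update test works because the kubelet periodically re-syncs configMap-backed volume contents: an in-place Update to the ConfigMap eventually appears in the mounted file, which is why the test has a "waiting to observe update in volume" step rather than checking immediately. A sketch of the update side only; the names are illustrative:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "cm-live-update" // illustrative

	// Fetch, mutate, and update the ConfigMap that backs a projected volume.
	// A pod mounting it sees the new value after the kubelet's next volume
	// sync, asynchronously.
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```
------------------------------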
• [SLOW TEST:6.226 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3495,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:00:01.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-b34034cd-bbc5-4121-9509-53f239997a6c STEP: Creating a pod to test consume configMaps May 11 01:00:01.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e" in namespace "projected-861" to be "Succeeded or Failed" May 11 01:00:01.207: INFO: Pod "pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.077644ms May 11 01:00:03.224: INFO: Pod "pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032526423s May 11 01:00:05.227: INFO: Pod "pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036436698s STEP: Saw pod success May 11 01:00:05.227: INFO: Pod "pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e" satisfied condition "Succeeded or Failed" May 11 01:00:05.230: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e container projected-configmap-volume-test: STEP: delete the pod May 11 01:00:05.269: INFO: Waiting for pod pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e to disappear May 11 01:00:05.280: INFO: Pod pod-projected-configmaps-d845e9c3-bdd2-4976-b7fb-d1261554714e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:00:05.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-861" for this suite. 
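------------------------------
Unlike the plain configMap volume earlier, this test consumes the ConfigMap through the projected volume plugin, which can combine configMaps, secrets, and downward API items behind a single mount. A sketch of a pod using a ConfigMapProjection; the names are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					// A projected volume with a single configMap source;
					// Items remaps the key to a path under the mount.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-demo"},
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
------------------------------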
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:00:05.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:00:05.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867" in namespace "projected-9986" to be "Succeeded or Failed" May 11 01:00:05.418: INFO: Pod "downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867": Phase="Pending", Reason="", readiness=false. Elapsed: 16.457787ms May 11 01:00:07.422: INFO: Pod "downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020720012s May 11 01:00:09.426: INFO: Pod "downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024722796s STEP: Saw pod success May 11 01:00:09.426: INFO: Pod "downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867" satisfied condition "Succeeded or Failed" May 11 01:00:09.429: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867 container client-container: STEP: delete the pod May 11 01:00:09.524: INFO: Waiting for pod downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867 to disappear May 11 01:00:09.532: INFO: Pod downwardapi-volume-732f589b-90ec-4d02-8390-0bdc39cf0867 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:00:09.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9986" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3579,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:00:09.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5227 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-5227 May 11 01:00:09.681: INFO: Found 0 stateful pods, waiting for 1 May 11 01:00:19.686: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 01:00:19.718: INFO: Deleting all statefulset in ns statefulset-5227 May 11 01:00:19.769: INFO: Scaling statefulset ss to 0 May 11 01:00:39.862: INFO: Waiting for statefulset status.replicas updated to 0 May 11 01:00:39.865: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:00:39.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5227" for this suite. 
• [SLOW TEST:30.349 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":211,"skipped":3590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:00:39.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 11 01:00:50.033: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.033: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.074842 7 log.go:172] (0xc006a52580) (0xc0017d9d60) Create stream I0511 01:00:50.074878 7 log.go:172] (0xc006a52580) (0xc0017d9d60) Stream added, broadcasting: 1 I0511 01:00:50.076633 7 log.go:172] (0xc006a52580) Reply frame received for 1 I0511 01:00:50.076690 7 log.go:172] (0xc006a52580) (0xc0025e3a40) Create stream I0511 01:00:50.076708 7 log.go:172] (0xc006a52580) (0xc0025e3a40) Stream added, broadcasting: 3 I0511 01:00:50.078038 7 log.go:172] (0xc006a52580) Reply frame received for 3 I0511 01:00:50.078076 7 log.go:172] (0xc006a52580) (0xc0017d9e00) Create stream I0511 01:00:50.078096 7 log.go:172] (0xc006a52580) (0xc0017d9e00) Stream added, broadcasting: 5 I0511 01:00:50.079142 7 log.go:172] (0xc006a52580) Reply frame received for 5 I0511 01:00:50.153458 7 log.go:172] (0xc006a52580) Data frame received for 3 I0511 01:00:50.153489 7 log.go:172] (0xc0025e3a40) (3) Data frame handling I0511 01:00:50.153496 7 log.go:172] (0xc0025e3a40) (3) Data frame sent I0511 01:00:50.153501 7 log.go:172] (0xc006a52580) Data frame received for 3 I0511 01:00:50.153516 7 log.go:172] (0xc0025e3a40) (3) Data frame handling I0511 01:00:50.153527 7 log.go:172] (0xc006a52580) Data frame received for 5 I0511 01:00:50.153535 7 log.go:172] (0xc0017d9e00) (5) Data frame handling I0511 01:00:50.154741 7 log.go:172] (0xc006a52580) Data frame received for 1 I0511 01:00:50.154767 7 log.go:172] (0xc0017d9d60) (1) Data frame handling I0511 01:00:50.154786 7 log.go:172] 
(0xc0017d9d60) (1) Data frame sent I0511 01:00:50.154800 7 log.go:172] (0xc006a52580) (0xc0017d9d60) Stream removed, broadcasting: 1 I0511 01:00:50.154810 7 log.go:172] (0xc006a52580) Go away received I0511 01:00:50.155016 7 log.go:172] (0xc006a52580) (0xc0017d9d60) Stream removed, broadcasting: 1 I0511 01:00:50.155034 7 log.go:172] (0xc006a52580) (0xc0025e3a40) Stream removed, broadcasting: 3 I0511 01:00:50.155040 7 log.go:172] (0xc006a52580) (0xc0017d9e00) Stream removed, broadcasting: 5 May 11 01:00:50.155: INFO: Exec stderr: "" May 11 01:00:50.155: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.155: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.186266 7 log.go:172] (0xc006a52bb0) (0xc0015a2320) Create stream I0511 01:00:50.186299 7 log.go:172] (0xc006a52bb0) (0xc0015a2320) Stream added, broadcasting: 1 I0511 01:00:50.188523 7 log.go:172] (0xc006a52bb0) Reply frame received for 1 I0511 01:00:50.188586 7 log.go:172] (0xc006a52bb0) (0xc002837cc0) Create stream I0511 01:00:50.188607 7 log.go:172] (0xc006a52bb0) (0xc002837cc0) Stream added, broadcasting: 3 I0511 01:00:50.189718 7 log.go:172] (0xc006a52bb0) Reply frame received for 3 I0511 01:00:50.189765 7 log.go:172] (0xc006a52bb0) (0xc002527a40) Create stream I0511 01:00:50.189786 7 log.go:172] (0xc006a52bb0) (0xc002527a40) Stream added, broadcasting: 5 I0511 01:00:50.190683 7 log.go:172] (0xc006a52bb0) Reply frame received for 5 I0511 01:00:50.266334 7 log.go:172] (0xc006a52bb0) Data frame received for 3 I0511 01:00:50.266401 7 log.go:172] (0xc002837cc0) (3) Data frame handling I0511 01:00:50.266423 7 log.go:172] (0xc002837cc0) (3) Data frame sent I0511 01:00:50.266438 7 log.go:172] (0xc006a52bb0) Data frame received for 3 I0511 01:00:50.266454 7 log.go:172] (0xc002837cc0) (3) Data frame handling I0511 01:00:50.266494 7 log.go:172] (0xc006a52bb0) Data frame received for 5 I0511 01:00:50.266528 7 log.go:172] (0xc002527a40) (5) Data frame handling I0511 01:00:50.267372 7 log.go:172] (0xc006a52bb0) Data frame received for 1 I0511 01:00:50.267394 7 log.go:172] (0xc0015a2320) (1) Data frame handling I0511 01:00:50.267406 7 log.go:172] (0xc0015a2320) (1) Data frame sent I0511 01:00:50.267433 7 log.go:172] (0xc006a52bb0) (0xc0015a2320) Stream removed, broadcasting: 1 I0511 01:00:50.267468 7 log.go:172] (0xc006a52bb0) Go away received I0511 01:00:50.267609 7 log.go:172] (0xc006a52bb0) (0xc0015a2320) Stream removed, broadcasting: 1 I0511 01:00:50.267641 7 log.go:172] (0xc006a52bb0) (0xc002837cc0) Stream removed, broadcasting: 3 I0511 01:00:50.267653 7 log.go:172] (0xc006a52bb0) (0xc002527a40) Stream removed, broadcasting: 5 May 11 01:00:50.267: INFO: Exec stderr: "" May 11 01:00:50.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.267: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.303398 7 log.go:172] (0xc006a531e0) (0xc0015a2640) Create stream I0511 01:00:50.303443 7 log.go:172] (0xc006a531e0) (0xc0015a2640) Stream added, broadcasting: 1 I0511 01:00:50.312486 7 log.go:172] (0xc006a531e0) Reply frame received for 1 I0511 01:00:50.312645 7 log.go:172] (0xc006a531e0) (0xc002a34000) Create stream I0511 01:00:50.312713 7 log.go:172] (0xc006a531e0) (0xc002a34000) Stream added, broadcasting: 3 
I0511 01:00:50.315067 7 log.go:172] (0xc006a531e0) Reply frame received for 3 I0511 01:00:50.315096 7 log.go:172] (0xc006a531e0) (0xc001126500) Create stream I0511 01:00:50.315106 7 log.go:172] (0xc006a531e0) (0xc001126500) Stream added, broadcasting: 5 I0511 01:00:50.316063 7 log.go:172] (0xc006a531e0) Reply frame received for 5 I0511 01:00:50.398300 7 log.go:172] (0xc006a531e0) Data frame received for 3 I0511 01:00:50.398328 7 log.go:172] (0xc002a34000) (3) Data frame handling I0511 01:00:50.398337 7 log.go:172] (0xc002a34000) (3) Data frame sent I0511 01:00:50.398342 7 log.go:172] (0xc006a531e0) Data frame received for 3 I0511 01:00:50.398347 7 log.go:172] (0xc002a34000) (3) Data frame handling I0511 01:00:50.398364 7 log.go:172] (0xc006a531e0) Data frame received for 5 I0511 01:00:50.398372 7 log.go:172] (0xc001126500) (5) Data frame handling I0511 01:00:50.399546 7 log.go:172] (0xc006a531e0) Data frame received for 1 I0511 01:00:50.399570 7 log.go:172] (0xc0015a2640) (1) Data frame handling I0511 01:00:50.399585 7 log.go:172] (0xc0015a2640) (1) Data frame sent I0511 01:00:50.399599 7 log.go:172] (0xc006a531e0) (0xc0015a2640) Stream removed, broadcasting: 1 I0511 01:00:50.399652 7 log.go:172] (0xc006a531e0) (0xc0015a2640) Stream removed, broadcasting: 1 I0511 01:00:50.399664 7 log.go:172] (0xc006a531e0) (0xc002a34000) Stream removed, broadcasting: 3 I0511 01:00:50.399729 7 log.go:172] (0xc006a531e0) Go away received I0511 01:00:50.399787 7 log.go:172] (0xc006a531e0) (0xc001126500) Stream removed, broadcasting: 5 May 11 01:00:50.399: INFO: Exec stderr: "" May 11 01:00:50.399: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.399: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.423618 7 log.go:172] (0xc002174370) (0xc00149ea00) Create stream I0511 01:00:50.423651 7 log.go:172] (0xc002174370) (0xc00149ea00) Stream added, broadcasting: 1 I0511 01:00:50.426405 7 log.go:172] (0xc002174370) Reply frame received for 1 I0511 01:00:50.426444 7 log.go:172] (0xc002174370) (0xc00149ed20) Create stream I0511 01:00:50.426460 7 log.go:172] (0xc002174370) (0xc00149ed20) Stream added, broadcasting: 3 I0511 01:00:50.427362 7 log.go:172] (0xc002174370) Reply frame received for 3 I0511 01:00:50.427401 7 log.go:172] (0xc002174370) (0xc00149ef00) Create stream I0511 01:00:50.427412 7 log.go:172] (0xc002174370) (0xc00149ef00) Stream added, broadcasting: 5 I0511 01:00:50.428314 7 log.go:172] (0xc002174370) Reply frame received for 5 I0511 01:00:50.508544 7 log.go:172] (0xc002174370) Data frame received for 5 I0511 01:00:50.508579 7 log.go:172] (0xc00149ef00) (5) Data frame handling I0511 01:00:50.508609 7 log.go:172] (0xc002174370) Data frame received for 3 I0511 01:00:50.508636 7 log.go:172] (0xc00149ed20) (3) Data frame handling I0511 01:00:50.508657 7 log.go:172] (0xc00149ed20) (3) Data frame sent I0511 01:00:50.508672 7 log.go:172] (0xc002174370) Data frame received for 3 I0511 01:00:50.508679 7 log.go:172] (0xc00149ed20) (3) Data frame handling I0511 01:00:50.509861 7 log.go:172] (0xc002174370) Data frame received for 1 I0511 01:00:50.509888 7 log.go:172] (0xc00149ea00) (1) Data frame handling I0511 01:00:50.509900 7 log.go:172] (0xc00149ea00) (1) Data frame sent I0511 01:00:50.509919 7 log.go:172] (0xc002174370) (0xc00149ea00) Stream removed, broadcasting: 1 I0511 01:00:50.509986 7 log.go:172] (0xc002174370) Go away received 
I0511 01:00:50.510055 7 log.go:172] (0xc002174370) (0xc00149ea00) Stream removed, broadcasting: 1 I0511 01:00:50.510078 7 log.go:172] (0xc002174370) (0xc00149ed20) Stream removed, broadcasting: 3 I0511 01:00:50.510087 7 log.go:172] (0xc002174370) (0xc00149ef00) Stream removed, broadcasting: 5 May 11 01:00:50.510: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 11 01:00:50.510: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.510: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.539427 7 log.go:172] (0xc001af8000) (0xc002134460) Create stream I0511 01:00:50.539452 7 log.go:172] (0xc001af8000) (0xc002134460) Stream added, broadcasting: 1 I0511 01:00:50.541775 7 log.go:172] (0xc001af8000) Reply frame received for 1 I0511 01:00:50.541809 7 log.go:172] (0xc001af8000) (0xc00149f0e0) Create stream I0511 01:00:50.541818 7 log.go:172] (0xc001af8000) (0xc00149f0e0) Stream added, broadcasting: 3 I0511 01:00:50.542676 7 log.go:172] (0xc001af8000) Reply frame received for 3 I0511 01:00:50.542710 7 log.go:172] (0xc001af8000) (0xc00149f5e0) Create stream I0511 01:00:50.542723 7 log.go:172] (0xc001af8000) (0xc00149f5e0) Stream added, broadcasting: 5 I0511 01:00:50.543678 7 log.go:172] (0xc001af8000) Reply frame received for 5 I0511 01:00:50.608281 7 log.go:172] (0xc001af8000) Data frame received for 5 I0511 01:00:50.608327 7 log.go:172] (0xc00149f5e0) (5) Data frame handling I0511 01:00:50.608358 7 log.go:172] (0xc001af8000) Data frame received for 3 I0511 01:00:50.608376 7 log.go:172] (0xc00149f0e0) (3) Data frame handling I0511 01:00:50.608394 7 log.go:172] (0xc00149f0e0) (3) Data frame sent I0511 01:00:50.608405 7 log.go:172] (0xc001af8000) Data frame received for 3 I0511 01:00:50.608415 7 log.go:172] (0xc00149f0e0) (3) Data frame handling I0511 01:00:50.609965 7 log.go:172] (0xc001af8000) Data frame received for 1 I0511 01:00:50.609985 7 log.go:172] (0xc002134460) (1) Data frame handling I0511 01:00:50.610004 7 log.go:172] (0xc002134460) (1) Data frame sent I0511 01:00:50.610015 7 log.go:172] (0xc001af8000) (0xc002134460) Stream removed, broadcasting: 1 I0511 01:00:50.610033 7 log.go:172] (0xc001af8000) Go away received I0511 01:00:50.610139 7 log.go:172] (0xc001af8000) (0xc002134460) Stream removed, broadcasting: 1 I0511 01:00:50.610168 7 log.go:172] (0xc001af8000) (0xc00149f0e0) Stream removed, broadcasting: 3 I0511 01:00:50.610183 7 log.go:172] (0xc001af8000) (0xc00149f5e0) Stream removed, broadcasting: 5 May 11 01:00:50.610: INFO: Exec stderr: "" May 11 01:00:50.610: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.610: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.644445 7 log.go:172] (0xc005ebe420) (0xc0017d8be0) Create stream I0511 01:00:50.644479 7 log.go:172] (0xc005ebe420) (0xc0017d8be0) Stream added, broadcasting: 1 I0511 01:00:50.647738 7 log.go:172] (0xc005ebe420) Reply frame received for 1 I0511 01:00:50.647797 7 log.go:172] (0xc005ebe420) (0xc00149f900) Create stream I0511 01:00:50.647818 7 log.go:172] (0xc005ebe420) (0xc00149f900) Stream added, broadcasting: 3 I0511 01:00:50.649077 7 log.go:172] (0xc005ebe420) Reply frame received for 3 I0511 
01:00:50.649247 7 log.go:172] (0xc005ebe420) (0xc0017d8c80) Create stream I0511 01:00:50.649262 7 log.go:172] (0xc005ebe420) (0xc0017d8c80) Stream added, broadcasting: 5 I0511 01:00:50.650304 7 log.go:172] (0xc005ebe420) Reply frame received for 5 I0511 01:00:50.736913 7 log.go:172] (0xc005ebe420) Data frame received for 3 I0511 01:00:50.736962 7 log.go:172] (0xc00149f900) (3) Data frame handling I0511 01:00:50.736986 7 log.go:172] (0xc00149f900) (3) Data frame sent I0511 01:00:50.737003 7 log.go:172] (0xc005ebe420) Data frame received for 3 I0511 01:00:50.737027 7 log.go:172] (0xc005ebe420) Data frame received for 5 I0511 01:00:50.737052 7 log.go:172] (0xc0017d8c80) (5) Data frame handling I0511 01:00:50.737088 7 log.go:172] (0xc00149f900) (3) Data frame handling I0511 01:00:50.738953 7 log.go:172] (0xc005ebe420) Data frame received for 1 I0511 01:00:50.738998 7 log.go:172] (0xc0017d8be0) (1) Data frame handling I0511 01:00:50.739027 7 log.go:172] (0xc0017d8be0) (1) Data frame sent I0511 01:00:50.739055 7 log.go:172] (0xc005ebe420) (0xc0017d8be0) Stream removed, broadcasting: 1 I0511 01:00:50.739083 7 log.go:172] (0xc005ebe420) Go away received I0511 01:00:50.739226 7 log.go:172] (0xc005ebe420) (0xc0017d8be0) Stream removed, broadcasting: 1 I0511 01:00:50.739259 7 log.go:172] (0xc005ebe420) (0xc00149f900) Stream removed, broadcasting: 3 I0511 01:00:50.739288 7 log.go:172] (0xc005ebe420) (0xc0017d8c80) Stream removed, broadcasting: 5 May 11 01:00:50.739: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 11 01:00:50.739: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.739: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.775789 7 log.go:172] (0xc0021749a0) (0xc0020820a0) Create stream I0511 01:00:50.775821 7 log.go:172] (0xc0021749a0) (0xc0020820a0) Stream added, broadcasting: 1 I0511 01:00:50.777671 7 log.go:172] (0xc0021749a0) Reply frame received for 1 I0511 01:00:50.777727 7 log.go:172] (0xc0021749a0) (0xc002134640) Create stream I0511 01:00:50.777746 7 log.go:172] (0xc0021749a0) (0xc002134640) Stream added, broadcasting: 3 I0511 01:00:50.778629 7 log.go:172] (0xc0021749a0) Reply frame received for 3 I0511 01:00:50.778671 7 log.go:172] (0xc0021749a0) (0xc0021346e0) Create stream I0511 01:00:50.778687 7 log.go:172] (0xc0021749a0) (0xc0021346e0) Stream added, broadcasting: 5 I0511 01:00:50.779548 7 log.go:172] (0xc0021749a0) Reply frame received for 5 I0511 01:00:50.857041 7 log.go:172] (0xc0021749a0) Data frame received for 5 I0511 01:00:50.857085 7 log.go:172] (0xc0021346e0) (5) Data frame handling I0511 01:00:50.857317 7 log.go:172] (0xc0021749a0) Data frame received for 3 I0511 01:00:50.857353 7 log.go:172] (0xc002134640) (3) Data frame handling I0511 01:00:50.857370 7 log.go:172] (0xc002134640) (3) Data frame sent I0511 01:00:50.857385 7 log.go:172] (0xc0021749a0) Data frame received for 3 I0511 01:00:50.857401 7 log.go:172] (0xc002134640) (3) Data frame handling I0511 01:00:50.859273 7 log.go:172] (0xc0021749a0) Data frame received for 1 I0511 01:00:50.859314 7 log.go:172] (0xc0020820a0) (1) Data frame handling I0511 01:00:50.859341 7 log.go:172] (0xc0020820a0) (1) Data frame sent I0511 01:00:50.859377 7 log.go:172] (0xc0021749a0) (0xc0020820a0) Stream removed, broadcasting: 1 I0511 01:00:50.859397 7 log.go:172] 
(0xc0021749a0) Go away received I0511 01:00:50.859499 7 log.go:172] (0xc0021749a0) (0xc0020820a0) Stream removed, broadcasting: 1 I0511 01:00:50.859517 7 log.go:172] (0xc0021749a0) (0xc002134640) Stream removed, broadcasting: 3 I0511 01:00:50.859524 7 log.go:172] (0xc0021749a0) (0xc0021346e0) Stream removed, broadcasting: 5 May 11 01:00:50.859: INFO: Exec stderr: "" May 11 01:00:50.859: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.859: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.886999 7 log.go:172] (0xc0029bcdc0) (0xc001126a00) Create stream I0511 01:00:50.887025 7 log.go:172] (0xc0029bcdc0) (0xc001126a00) Stream added, broadcasting: 1 I0511 01:00:50.888908 7 log.go:172] (0xc0029bcdc0) Reply frame received for 1 I0511 01:00:50.888957 7 log.go:172] (0xc0029bcdc0) (0xc0020821e0) Create stream I0511 01:00:50.888976 7 log.go:172] (0xc0029bcdc0) (0xc0020821e0) Stream added, broadcasting: 3 I0511 01:00:50.889989 7 log.go:172] (0xc0029bcdc0) Reply frame received for 3 I0511 01:00:50.890023 7 log.go:172] (0xc0029bcdc0) (0xc002134820) Create stream I0511 01:00:50.890035 7 log.go:172] (0xc0029bcdc0) (0xc002134820) Stream added, broadcasting: 5 I0511 01:00:50.890801 7 log.go:172] (0xc0029bcdc0) Reply frame received for 5 I0511 01:00:50.954959 7 log.go:172] (0xc0029bcdc0) Data frame received for 5 I0511 01:00:50.954986 7 log.go:172] (0xc002134820) (5) Data frame handling I0511 01:00:50.955016 7 log.go:172] (0xc0029bcdc0) Data frame received for 3 I0511 01:00:50.955032 7 log.go:172] (0xc0020821e0) (3) Data frame handling I0511 01:00:50.955046 7 log.go:172] (0xc0020821e0) (3) Data frame sent I0511 01:00:50.955057 7 log.go:172] (0xc0029bcdc0) Data frame received for 3 I0511 01:00:50.955066 7 log.go:172] (0xc0020821e0) (3) Data frame handling I0511 01:00:50.956558 7 log.go:172] (0xc0029bcdc0) Data frame received for 1 I0511 01:00:50.956596 7 log.go:172] (0xc001126a00) (1) Data frame handling I0511 01:00:50.956614 7 log.go:172] (0xc001126a00) (1) Data frame sent I0511 01:00:50.956633 7 log.go:172] (0xc0029bcdc0) (0xc001126a00) Stream removed, broadcasting: 1 I0511 01:00:50.956656 7 log.go:172] (0xc0029bcdc0) Go away received I0511 01:00:50.956784 7 log.go:172] (0xc0029bcdc0) (0xc001126a00) Stream removed, broadcasting: 1 I0511 01:00:50.956811 7 log.go:172] (0xc0029bcdc0) (0xc0020821e0) Stream removed, broadcasting: 3 I0511 01:00:50.956830 7 log.go:172] (0xc0029bcdc0) (0xc002134820) Stream removed, broadcasting: 5 May 11 01:00:50.956: INFO: Exec stderr: "" May 11 01:00:50.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:50.956: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:50.988595 7 log.go:172] (0xc005ebebb0) (0xc0017d9400) Create stream I0511 01:00:50.988633 7 log.go:172] (0xc005ebebb0) (0xc0017d9400) Stream added, broadcasting: 1 I0511 01:00:50.991067 7 log.go:172] (0xc005ebebb0) Reply frame received for 1 I0511 01:00:50.991124 7 log.go:172] (0xc005ebebb0) (0xc002134960) Create stream I0511 01:00:50.991149 7 log.go:172] (0xc005ebebb0) (0xc002134960) Stream added, broadcasting: 3 I0511 01:00:50.992062 7 log.go:172] (0xc005ebebb0) Reply frame received for 3 I0511 01:00:50.992126 7 log.go:172] (0xc005ebebb0) (0xc001126b40) 
Create stream I0511 01:00:50.992145 7 log.go:172] (0xc005ebebb0) (0xc001126b40) Stream added, broadcasting: 5 I0511 01:00:50.993100 7 log.go:172] (0xc005ebebb0) Reply frame received for 5 I0511 01:00:51.064818 7 log.go:172] (0xc005ebebb0) Data frame received for 5 I0511 01:00:51.064848 7 log.go:172] (0xc001126b40) (5) Data frame handling I0511 01:00:51.064869 7 log.go:172] (0xc005ebebb0) Data frame received for 3 I0511 01:00:51.064876 7 log.go:172] (0xc002134960) (3) Data frame handling I0511 01:00:51.064885 7 log.go:172] (0xc002134960) (3) Data frame sent I0511 01:00:51.064892 7 log.go:172] (0xc005ebebb0) Data frame received for 3 I0511 01:00:51.064901 7 log.go:172] (0xc002134960) (3) Data frame handling I0511 01:00:51.066018 7 log.go:172] (0xc005ebebb0) Data frame received for 1 I0511 01:00:51.066036 7 log.go:172] (0xc0017d9400) (1) Data frame handling I0511 01:00:51.066055 7 log.go:172] (0xc0017d9400) (1) Data frame sent I0511 01:00:51.066067 7 log.go:172] (0xc005ebebb0) (0xc0017d9400) Stream removed, broadcasting: 1 I0511 01:00:51.066084 7 log.go:172] (0xc005ebebb0) Go away received I0511 01:00:51.066206 7 log.go:172] (0xc005ebebb0) (0xc0017d9400) Stream removed, broadcasting: 1 I0511 01:00:51.066215 7 log.go:172] (0xc005ebebb0) (0xc002134960) Stream removed, broadcasting: 3 I0511 01:00:51.066220 7 log.go:172] (0xc005ebebb0) (0xc001126b40) Stream removed, broadcasting: 5 May 11 01:00:51.066: INFO: Exec stderr: "" May 11 01:00:51.066: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3297 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:00:51.066: INFO: >>> kubeConfig: /root/.kube/config I0511 01:00:51.093505 7 log.go:172] (0xc002174fd0) (0xc002082500) Create stream I0511 01:00:51.093531 7 log.go:172] (0xc002174fd0) (0xc002082500) Stream added, broadcasting: 1 I0511 01:00:51.097338 7 log.go:172] (0xc002174fd0) Reply frame received for 1 I0511 01:00:51.097424 7 log.go:172] (0xc002174fd0) (0xc002134a00) Create stream I0511 01:00:51.097476 7 log.go:172] (0xc002174fd0) (0xc002134a00) Stream added, broadcasting: 3 I0511 01:00:51.102196 7 log.go:172] (0xc002174fd0) Reply frame received for 3 I0511 01:00:51.102232 7 log.go:172] (0xc002174fd0) (0xc002082640) Create stream I0511 01:00:51.102243 7 log.go:172] (0xc002174fd0) (0xc002082640) Stream added, broadcasting: 5 I0511 01:00:51.103490 7 log.go:172] (0xc002174fd0) Reply frame received for 5 I0511 01:00:51.154571 7 log.go:172] (0xc002174fd0) Data frame received for 5 I0511 01:00:51.154613 7 log.go:172] (0xc002082640) (5) Data frame handling I0511 01:00:51.154663 7 log.go:172] (0xc002174fd0) Data frame received for 3 I0511 01:00:51.154850 7 log.go:172] (0xc002134a00) (3) Data frame handling I0511 01:00:51.154877 7 log.go:172] (0xc002134a00) (3) Data frame sent I0511 01:00:51.154893 7 log.go:172] (0xc002174fd0) Data frame received for 3 I0511 01:00:51.154914 7 log.go:172] (0xc002134a00) (3) Data frame handling I0511 01:00:51.156642 7 log.go:172] (0xc002174fd0) Data frame received for 1 I0511 01:00:51.156661 7 log.go:172] (0xc002082500) (1) Data frame handling I0511 01:00:51.156672 7 log.go:172] (0xc002082500) (1) Data frame sent I0511 01:00:51.156694 7 log.go:172] (0xc002174fd0) (0xc002082500) Stream removed, broadcasting: 1 I0511 01:00:51.156720 7 log.go:172] (0xc002174fd0) Go away received I0511 01:00:51.156854 7 log.go:172] (0xc002174fd0) (0xc002082500) Stream removed, broadcasting: 1 I0511 01:00:51.156891 7 log.go:172] 
(0xc002174fd0) (0xc002134a00) Stream removed, broadcasting: 3 I0511 01:00:51.156921 7 log.go:172] (0xc002174fd0) (0xc002082640) Stream removed, broadcasting: 5 May 11 01:00:51.156: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:00:51.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3297" for this suite. • [SLOW TEST:11.277 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3646,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:00:51.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 11 01:00:51.234: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 11 01:01:02.901: INFO: >>> kubeConfig: /root/.kube/config May 11 01:01:05.836: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:16.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5372" for this suite. 
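------------------------------
The multi-version CRD test publishes several served versions of one group and expects each to show up in the OpenAPI document. A sketch of a single CRD serving two versions, of which exactly one is the storage version; the group and names are illustrative:

```go
package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// apiextensions v1 requires a structural schema per served version.
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "gadgets.example.com"}, // illustrative
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "gadgets", Singular: "gadget", Kind: "Gadget", ListKind: "GadgetList",
			},
			// Two served versions of the same group; only one may have
			// Storage set to true.
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
------------------------------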
• [SLOW TEST:25.223 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":213,"skipped":3654,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:16.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:16.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3811" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":214,"skipped":3672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:16.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 01:01:16.661: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 01:01:16.750: INFO: Waiting for terminating namespaces to be deleted... 
May 11 01:01:16.757: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 11 01:01:16.763: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-3297 started at 2020-05-11 01:00:46 +0000 UTC (2 container statuses recorded) May 11 01:01:16.763: INFO: Container busybox-1 ready: true, restart count 0 May 11 01:01:16.763: INFO: Container busybox-2 ready: true, restart count 0 May 11 01:01:16.763: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 01:01:16.763: INFO: Container kindnet-cni ready: true, restart count 0 May 11 01:01:16.763: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 01:01:16.763: INFO: Container kube-proxy ready: true, restart count 0 May 11 01:01:16.763: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 11 01:01:16.767: INFO: test-pod from e2e-kubelet-etc-hosts-3297 started at 2020-05-11 01:00:40 +0000 UTC (3 container statuses recorded) May 11 01:01:16.767: INFO: Container busybox-1 ready: true, restart count 0 May 11 01:01:16.767: INFO: Container busybox-2 ready: true, restart count 0 May 11 01:01:16.767: INFO: Container busybox-3 ready: true, restart count 0 May 11 01:01:16.767: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 01:01:16.767: INFO: Container kindnet-cni ready: true, restart count 0 May 11 01:01:16.767: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 01:01:16.767: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ed70ce2d-df9a-4d44-ac3b-51cb97622a83 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ed70ce2d-df9a-4d44-ac3b-51cb97622a83 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ed70ce2d-df9a-4d44-ac3b-51cb97622a83 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:24.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5475" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.388 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":215,"skipped":3701,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:24.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 11 01:01:31.149: INFO: Pod pod-hostip-2ac4bc65-a9a5-4ce8-acd9-f2f3890aed4f has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:31.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7112" for this suite. 
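------------------------------
The host IP test can only read status.hostIP after the kubelet has bound and started the pod, which is why the test creates the pod and checks a few seconds later. A polling sketch against an already-created pod; the pod name and namespace are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "pod-hostip-demo" // illustrative

	// status.hostIP is populated once the pod is scheduled and started, so
	// poll until it is non-empty rather than reading it right after create.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if pod.Status.HostIP != "" {
			fmt.Println("hostIP:", pod.Status.HostIP)
			return
		}
		time.Sleep(time.Second)
	}
	panic("timed out waiting for hostIP")
}
```
------------------------------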
• [SLOW TEST:6.162 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3713,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:31.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-fb670952-065e-45e4-80a4-00fe4658d3fc STEP: Creating a pod to test consume secrets May 11 01:01:31.369: INFO: Waiting up to 5m0s for pod "pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76" in namespace "secrets-6498" to be "Succeeded or Failed" May 11 01:01:31.422: INFO: Pod "pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76": Phase="Pending", Reason="", readiness=false. Elapsed: 53.052268ms May 11 01:01:33.493: INFO: Pod "pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123824744s May 11 01:01:35.496: INFO: Pod "pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127579762s STEP: Saw pod success May 11 01:01:35.496: INFO: Pod "pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76" satisfied condition "Succeeded or Failed" May 11 01:01:35.500: INFO: Trying to get logs from node latest-worker pod pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76 container secret-env-test: STEP: delete the pod May 11 01:01:35.561: INFO: Waiting for pod pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76 to disappear May 11 01:01:35.570: INFO: Pod pod-secrets-0498466f-4c0d-43f7-a105-22a7dd251b76 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:35.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6498" for this suite. 
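------------------------------
The secrets-in-env test injects a single secret key through valueFrom/secretKeyRef and then reads the container log for the expected value. A sketch of that pod shape; the secret, key, and pod names are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "env-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					// secretKeyRef pulls one key out of the secret into the
					// container environment.
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
------------------------------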
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3720,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:35.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 11 01:01:44.243: INFO: Successfully updated pod "annotationupdate0ed84828-309f-44c9-ae4c-1dfea4020388" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:46.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2685" for this suite. • [SLOW TEST:10.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":218,"skipped":3724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:46.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0511 01:01:58.619257 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 01:01:58.619: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:01:58.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6006" for this suite. • [SLOW TEST:12.333 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":219,"skipped":3747,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:01:58.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 01:01:58.745: INFO: Waiting up to 5m0s for pod "pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7" in namespace "emptydir-7984" to be "Succeeded or Failed" May 11 01:01:58.747: INFO: Pod "pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544331ms May 11 01:02:00.752: INFO: Pod "pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006756132s May 11 01:02:02.756: INFO: Pod "pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011356616s STEP: Saw pod success May 11 01:02:02.756: INFO: Pod "pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7" satisfied condition "Succeeded or Failed" May 11 01:02:02.760: INFO: Trying to get logs from node latest-worker pod pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7 container test-container: STEP: delete the pod May 11 01:02:02.810: INFO: Waiting for pod pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7 to disappear May 11 01:02:02.889: INFO: Pod pod-0d57f782-f486-4a99-82c6-29b1c6f81bf7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:02:02.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7984" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3750,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:02:02.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-229d0037-d22a-456e-b927-2d78030ca0ef STEP: Creating a pod to test consume secrets May 11 01:02:03.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d" in namespace "projected-7189" to be "Succeeded or Failed" May 11 01:02:03.051: INFO: Pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.32471ms May 11 01:02:05.189: INFO: Pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142110102s May 11 01:02:07.207: INFO: Pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159382307s May 11 01:02:09.210: INFO: Pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.162965291s STEP: Saw pod success May 11 01:02:09.210: INFO: Pod "pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d" satisfied condition "Succeeded or Failed" May 11 01:02:09.212: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d container projected-secret-volume-test: STEP: delete the pod May 11 01:02:09.242: INFO: Waiting for pod pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d to disappear May 11 01:02:09.255: INFO: Pod pod-projected-secrets-e4fb40e6-cfae-4777-9835-2196981e4c6d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:02:09.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7189" for this suite. • [SLOW TEST:6.364 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3753,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:02:09.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-x8xx4 in namespace proxy-2705 I0511 01:02:09.435243 7 runners.go:190] Created replication controller with name: proxy-service-x8xx4, namespace: proxy-2705, replica count: 1 I0511 01:02:10.485636 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 01:02:11.485884 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 01:02:12.486112 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 01:02:13.486323 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:14.486532 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:15.486804 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:16.487026 7 
runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:17.487234 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:18.487506 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 01:02:19.487748 7 runners.go:190] proxy-service-x8xx4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 01:02:19.524: INFO: setup took 10.179751032s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 01:02:19.531: INFO: (0) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.669329ms) May 11 01:02:19.532: INFO: (0) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 7.127811ms) May 11 01:02:19.532: INFO: (0) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 7.055725ms) May 11 01:02:19.533: INFO: (0) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 8.044101ms) May 11 01:02:19.533: INFO: (0) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 8.482362ms) May 11 01:02:19.533: INFO: (0) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 8.560849ms) May 11 01:02:19.533: INFO: (0) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 8.953457ms) May 11 01:02:19.534: INFO: (0) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 9.640248ms) May 11 01:02:19.535: INFO: (0) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 10.705643ms) May 11 01:02:19.535: INFO: (0) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 10.583411ms) May 11 01:02:19.535: INFO: (0) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 10.924391ms) May 11 01:02:19.539: INFO: (0) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 14.418614ms) May 11 01:02:19.539: INFO: (0) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 14.495693ms) May 11 01:02:19.540: INFO: (0) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 15.959127ms) May 11 01:02:19.540: INFO: (0) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 15.796537ms) May 11 01:02:19.541: INFO: (0) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 9.579525ms) May 11 01:02:19.551: INFO: (1) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 9.98076ms) May 11 01:02:19.551: INFO: (1) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 9.930116ms) May 11 01:02:19.551: INFO: (1) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... 
(200; 11.889356ms) May 11 01:02:19.554: INFO: (1) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 12.87945ms) May 11 01:02:19.554: INFO: (1) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 13.376109ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 13.471906ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 13.576358ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 13.792776ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 13.647393ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 13.68847ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 13.674476ms) May 11 01:02:19.555: INFO: (1) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 13.800234ms) May 11 01:02:19.560: INFO: (2) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.352668ms) May 11 01:02:19.560: INFO: (2) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 5.317454ms) May 11 01:02:19.560: INFO: (2) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 5.725927ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.843376ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.837809ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 5.913025ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 6.03989ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 6.022497ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.090178ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 6.058723ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.135487ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.085072ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.093803ms) May 11 01:02:19.561: INFO: (2) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 6.173164ms) May 11 01:02:19.564: INFO: (3) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.074468ms) May 11 01:02:19.565: INFO: (3) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... 
(200; 3.756332ms) May 11 01:02:19.565: INFO: (3) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 4.169238ms) May 11 01:02:19.566: INFO: (3) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 4.804588ms) May 11 01:02:19.566: INFO: (3) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.920231ms) May 11 01:02:19.566: INFO: (3) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 4.964143ms) May 11 01:02:19.566: INFO: (3) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.026244ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 5.521971ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.546943ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.651967ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 5.607606ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 5.91123ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.976113ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 6.087271ms) May 11 01:02:19.567: INFO: (3) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.082686ms) May 11 01:02:19.568: INFO: (3) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 4.713282ms) May 11 01:02:19.572: INFO: (4) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 4.755739ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... 
(200; 5.326972ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 5.412847ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.338503ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.375365ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.351491ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.65693ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 5.618321ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.77478ms) May 11 01:02:19.573: INFO: (4) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 2.504849ms) May 11 01:02:19.576: INFO: (5) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 2.759391ms) May 11 01:02:19.576: INFO: (5) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 2.785847ms) May 11 01:02:19.578: INFO: (5) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.629256ms) May 11 01:02:19.579: INFO: (5) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 4.816486ms) May 11 01:02:19.579: INFO: (5) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 5.519501ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.844136ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 6.070828ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 6.085737ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.074296ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.079644ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.09878ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.095176ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 6.099802ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 6.137351ms) May 11 01:02:19.580: INFO: (5) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test<... 
(200; 6.69017ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 6.650644ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 6.731329ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.775702ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 6.740317ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 7.056708ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 7.082794ms) May 11 01:02:19.587: INFO: (6) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 7.07472ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 3.663099ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.798147ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 3.698237ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 3.807782ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.800455ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.836758ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 3.808184ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.831965ms) May 11 01:02:19.591: INFO: (7) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 3.829786ms) May 11 01:02:19.592: INFO: (7) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 4.789875ms) May 11 01:02:19.594: INFO: (7) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.318611ms) May 11 01:02:19.594: INFO: (7) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 6.389665ms) May 11 01:02:19.594: INFO: (7) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 6.296888ms) May 11 01:02:19.594: INFO: (7) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.366825ms) May 11 01:02:19.594: INFO: (7) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 6.409646ms) May 11 01:02:19.596: INFO: (8) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test<... 
(200; 5.182103ms) May 11 01:02:19.599: INFO: (8) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.4459ms) May 11 01:02:19.599: INFO: (8) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.519392ms) May 11 01:02:19.599: INFO: (8) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.621097ms) May 11 01:02:19.600: INFO: (8) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 5.732291ms) May 11 01:02:19.600: INFO: (8) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.701828ms) May 11 01:02:19.600: INFO: (8) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.744405ms) May 11 01:02:19.600: INFO: (8) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 5.871582ms) May 11 01:02:19.600: INFO: (8) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.843467ms) May 11 01:02:19.601: INFO: (8) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 7.32126ms) May 11 01:02:19.601: INFO: (8) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 7.308931ms) May 11 01:02:19.601: INFO: (8) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 7.419617ms) May 11 01:02:19.601: INFO: (8) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 7.570725ms) May 11 01:02:19.603: INFO: (8) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 8.688084ms) May 11 01:02:19.603: INFO: (8) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 8.708963ms) May 11 01:02:19.605: INFO: (9) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 2.646163ms) May 11 01:02:19.606: INFO: (9) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test<... (200; 3.808279ms) May 11 01:02:19.607: INFO: (9) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.084348ms) May 11 01:02:19.607: INFO: (9) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 4.084477ms) May 11 01:02:19.607: INFO: (9) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... 
(200; 4.87661ms) May 11 01:02:19.608: INFO: (9) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.918042ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 6.014417ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.15095ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.331551ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 6.371424ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.226191ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 6.19621ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 6.270481ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 6.275835ms) May 11 01:02:19.609: INFO: (9) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.309904ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 3.393101ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 3.394732ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.000204ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 4.017482ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 4.096435ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 4.110371ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.077981ms) May 11 01:02:19.613: INFO: (10) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.266283ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 4.924474ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 4.869784ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 5.028019ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.191527ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 5.25633ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.240091ms) May 11 01:02:19.614: INFO: (10) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.326711ms) May 11 01:02:19.617: INFO: (11) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... 
(200; 4.298563ms) May 11 01:02:19.619: INFO: (11) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.605754ms) May 11 01:02:19.619: INFO: (11) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 4.504159ms) May 11 01:02:19.619: INFO: (11) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 4.524612ms) May 11 01:02:19.619: INFO: (11) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 4.569397ms) May 11 01:02:19.619: INFO: (11) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.741441ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 4.974782ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.062529ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.103648ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.166554ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.288569ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 5.347266ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.285018ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 5.683368ms) May 11 01:02:19.620: INFO: (11) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.672654ms) May 11 01:02:19.624: INFO: (12) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.311529ms) May 11 01:02:19.624: INFO: (12) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.395961ms) May 11 01:02:19.624: INFO: (12) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.637074ms) May 11 01:02:19.624: INFO: (12) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.256088ms) May 11 01:02:19.625: INFO: (12) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 4.34892ms) May 11 01:02:19.625: INFO: (12) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 4.906157ms) May 11 01:02:19.625: INFO: (12) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 4.896898ms) May 11 01:02:19.625: INFO: (12) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.070888ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 5.052622ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... 
(200; 5.06818ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 5.037657ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 5.510204ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.561359ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.464975ms) May 11 01:02:19.626: INFO: (12) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 3.283384ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.457743ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test (200; 3.702162ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 3.768409ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.979505ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 3.942731ms) May 11 01:02:19.630: INFO: (13) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.088175ms) May 11 01:02:19.631: INFO: (13) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 4.684623ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 5.495859ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 5.700638ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 5.712324ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.754415ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 5.81491ms) May 11 01:02:19.632: INFO: (13) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.713096ms) May 11 01:02:19.635: INFO: (14) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 4.399746ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... 
(200; 4.421906ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 4.556767ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 4.452252ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.594786ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 4.594749ms) May 11 01:02:19.636: INFO: (14) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 4.511598ms) May 11 01:02:19.637: INFO: (14) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.965656ms) May 11 01:02:19.638: INFO: (14) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.931229ms) May 11 01:02:19.638: INFO: (14) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.993415ms) May 11 01:02:19.638: INFO: (14) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 5.981547ms) May 11 01:02:19.638: INFO: (14) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.014235ms) May 11 01:02:19.641: INFO: (15) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 2.750771ms) May 11 01:02:19.641: INFO: (15) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 2.866265ms) May 11 01:02:19.642: INFO: (15) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 3.570193ms) May 11 01:02:19.642: INFO: (15) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 4.745194ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.832522ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.82727ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 5.018986ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 5.162483ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 5.145947ms) May 11 01:02:19.643: INFO: (15) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.208726ms) May 11 01:02:19.645: INFO: (16) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 1.916418ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.338049ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 4.21936ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.459295ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 4.656424ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... 
(200; 4.794715ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 4.771794ms) May 11 01:02:19.648: INFO: (16) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 4.801648ms) May 11 01:02:19.649: INFO: (16) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 5.794907ms) May 11 01:02:19.650: INFO: (16) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 6.074582ms) May 11 01:02:19.650: INFO: (16) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.076009ms) May 11 01:02:19.650: INFO: (16) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 6.187111ms) May 11 01:02:19.650: INFO: (16) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.270594ms) May 11 01:02:19.650: INFO: (16) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 6.376388ms) May 11 01:02:19.652: INFO: (17) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: ... (200; 2.499128ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.914003ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname2/proxy/: bar (200; 5.959571ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.957057ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.149579ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname1/proxy/: tls baz (200; 6.253262ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 6.21801ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/https:proxy-service-x8xx4:tlsportname2/proxy/: tls qux (200; 6.306041ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 6.215271ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 6.299695ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname1/proxy/: foo (200; 6.254363ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/services/proxy-service-x8xx4:portname2/proxy/: bar (200; 6.249298ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 6.306039ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 6.600242ms) May 11 01:02:19.656: INFO: (17) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... 
(200; 6.559173ms) May 11 01:02:19.659: INFO: (18) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 2.623684ms) May 11 01:02:19.659: INFO: (18) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 2.936708ms) May 11 01:02:19.660: INFO: (18) /api/v1/namespaces/proxy-2705/services/http:proxy-service-x8xx4:portname1/proxy/: foo (200; 3.5753ms) May 11 01:02:19.660: INFO: (18) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.476453ms) May 11 01:02:19.661: INFO: (18) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.740335ms) May 11 01:02:19.661: INFO: (18) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:1080/proxy/: test<... (200; 4.831842ms) May 11 01:02:19.662: INFO: (18) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 5.042102ms) May 11 01:02:19.662: INFO: (18) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 5.115231ms) May 11 01:02:19.662: INFO: (18) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 5.078579ms) May 11 01:02:19.662: INFO: (18) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 5.132495ms) May 11 01:02:19.662: INFO: (18) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: test<... (200; 3.234746ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:1080/proxy/: ... (200; 3.406189ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.629255ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:460/proxy/: tls baz (200; 3.735059ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 3.686007ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/http:proxy-service-x8xx4-9wzxk:162/proxy/: bar (200; 3.737871ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:462/proxy/: tls qux (200; 3.712408ms) May 11 01:02:19.666: INFO: (19) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk/proxy/: test (200; 3.992276ms) May 11 01:02:19.667: INFO: (19) /api/v1/namespaces/proxy-2705/pods/proxy-service-x8xx4-9wzxk:160/proxy/: foo (200; 4.700869ms) May 11 01:02:19.667: INFO: (19) /api/v1/namespaces/proxy-2705/pods/https:proxy-service-x8xx4-9wzxk:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 01:02:22.668: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 01:02:24.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755742, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755742, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755742, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755742, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 01:02:27.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:02:27.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4096" for this suite. STEP: Destroying namespace "webhook-4096-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.925 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":223,"skipped":3800,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:02:27.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 11 01:02:32.814: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9755 PodName:var-expansion-6566da4f-6b5e-4462-bff5-c9f1be089464 
ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:02:32.814: INFO: >>> kubeConfig: /root/.kube/config I0511 01:02:32.849499 7 log.go:172] (0xc002175080) (0xc000185d60) Create stream I0511 01:02:32.849534 7 log.go:172] (0xc002175080) (0xc000185d60) Stream added, broadcasting: 1 I0511 01:02:32.851194 7 log.go:172] (0xc002175080) Reply frame received for 1 I0511 01:02:32.851255 7 log.go:172] (0xc002175080) (0xc000bb92c0) Create stream I0511 01:02:32.851274 7 log.go:172] (0xc002175080) (0xc000bb92c0) Stream added, broadcasting: 3 I0511 01:02:32.852220 7 log.go:172] (0xc002175080) Reply frame received for 3 I0511 01:02:32.852253 7 log.go:172] (0xc002175080) (0xc00120b7c0) Create stream I0511 01:02:32.852263 7 log.go:172] (0xc002175080) (0xc00120b7c0) Stream added, broadcasting: 5 I0511 01:02:32.853063 7 log.go:172] (0xc002175080) Reply frame received for 5 I0511 01:02:32.961374 7 log.go:172] (0xc002175080) Data frame received for 3 I0511 01:02:32.961409 7 log.go:172] (0xc000bb92c0) (3) Data frame handling I0511 01:02:32.961456 7 log.go:172] (0xc002175080) Data frame received for 5 I0511 01:02:32.961487 7 log.go:172] (0xc00120b7c0) (5) Data frame handling I0511 01:02:32.962892 7 log.go:172] (0xc002175080) Data frame received for 1 I0511 01:02:32.962916 7 log.go:172] (0xc000185d60) (1) Data frame handling I0511 01:02:32.962931 7 log.go:172] (0xc000185d60) (1) Data frame sent I0511 01:02:32.962944 7 log.go:172] (0xc002175080) (0xc000185d60) Stream removed, broadcasting: 1 I0511 01:02:32.962959 7 log.go:172] (0xc002175080) Go away received I0511 01:02:32.963087 7 log.go:172] (0xc002175080) (0xc000185d60) Stream removed, broadcasting: 1 I0511 01:02:32.963117 7 log.go:172] (0xc002175080) (0xc000bb92c0) Stream removed, broadcasting: 3 I0511 01:02:32.963131 7 log.go:172] (0xc002175080) (0xc00120b7c0) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 11 01:02:32.982: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9755 PodName:var-expansion-6566da4f-6b5e-4462-bff5-c9f1be089464 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:02:32.982: INFO: >>> kubeConfig: /root/.kube/config I0511 01:02:33.014462 7 log.go:172] (0xc0021756b0) (0xc0003e0aa0) Create stream I0511 01:02:33.014494 7 log.go:172] (0xc0021756b0) (0xc0003e0aa0) Stream added, broadcasting: 1 I0511 01:02:33.015966 7 log.go:172] (0xc0021756b0) Reply frame received for 1 I0511 01:02:33.015995 7 log.go:172] (0xc0021756b0) (0xc00120b860) Create stream I0511 01:02:33.016003 7 log.go:172] (0xc0021756b0) (0xc00120b860) Stream added, broadcasting: 3 I0511 01:02:33.017041 7 log.go:172] (0xc0021756b0) Reply frame received for 3 I0511 01:02:33.017063 7 log.go:172] (0xc0021756b0) (0xc0003e0d20) Create stream I0511 01:02:33.017069 7 log.go:172] (0xc0021756b0) (0xc0003e0d20) Stream added, broadcasting: 5 I0511 01:02:33.018182 7 log.go:172] (0xc0021756b0) Reply frame received for 5 I0511 01:02:33.075353 7 log.go:172] (0xc0021756b0) Data frame received for 5 I0511 01:02:33.075407 7 log.go:172] (0xc0003e0d20) (5) Data frame handling I0511 01:02:33.075438 7 log.go:172] (0xc0021756b0) Data frame received for 3 I0511 01:02:33.075463 7 log.go:172] (0xc00120b860) (3) Data frame handling I0511 01:02:33.076730 7 log.go:172] (0xc0021756b0) Data frame received for 1 I0511 01:02:33.076748 7 log.go:172] (0xc0003e0aa0) (1) Data frame handling I0511 01:02:33.076784 7 log.go:172] 
(0xc0003e0aa0) (1) Data frame sent I0511 01:02:33.076798 7 log.go:172] (0xc0021756b0) (0xc0003e0aa0) Stream removed, broadcasting: 1 I0511 01:02:33.076965 7 log.go:172] (0xc0021756b0) Go away received I0511 01:02:33.076999 7 log.go:172] (0xc0021756b0) (0xc0003e0aa0) Stream removed, broadcasting: 1 I0511 01:02:33.077023 7 log.go:172] (0xc0021756b0) (0xc00120b860) Stream removed, broadcasting: 3 I0511 01:02:33.077034 7 log.go:172] (0xc0021756b0) (0xc0003e0d20) Stream removed, broadcasting: 5 STEP: updating the annotation value May 11 01:02:33.588: INFO: Successfully updated pod "var-expansion-6566da4f-6b5e-4462-bff5-c9f1be089464" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 11 01:02:33.609: INFO: Deleting pod "var-expansion-6566da4f-6b5e-4462-bff5-c9f1be089464" in namespace "var-expansion-9755" May 11 01:02:33.614: INFO: Wait up to 5m0s for pod "var-expansion-6566da4f-6b5e-4462-bff5-c9f1be089464" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:15.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9755" for this suite. • [SLOW TEST:47.699 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":224,"skipped":3803,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:15.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 01:03:15.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8005' May 11 01:03:15.871: INFO: stderr: "" May 11 01:03:15.871: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 11 01:03:20.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8005 -o json' May 11 01:03:21.039: INFO: 
stderr: "" May 11 01:03:21.039: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T01:03:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-11T01:03:15Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.232\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-11T01:03:19Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8005\",\n \"resourceVersion\": \"3229277\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8005/pods/e2e-test-httpd-pod\",\n \"uid\": \"bd168109-1398-4493-86ef-18b1a5f696af\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-w25lm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-w25lm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-w25lm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T01:03:15Z\",\n \"status\": \"True\",\n 
\"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T01:03:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T01:03:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T01:03:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://85f384a587da67f529980053f46084ad5c7ab290c868b55ec7a406ca37a7aa91\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T01:03:18Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.232\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.232\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T01:03:15Z\"\n }\n}\n" STEP: replace the image in the pod May 11 01:03:21.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8005' May 11 01:03:21.333: INFO: stderr: "" May 11 01:03:21.333: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 11 01:03:21.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8005' May 11 01:03:35.229: INFO: stderr: "" May 11 01:03:35.229: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:35.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8005" for this suite. 
• [SLOW TEST:19.572 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":225,"skipped":3808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:35.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 11 01:03:40.453: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:40.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6404" for this suite. • [SLOW TEST:5.396 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":226,"skipped":3843,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:40.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:51.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2098" for this suite. • [SLOW TEST:11.277 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":227,"skipped":3853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:51.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:52.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9584" for this suite. 
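The QOS-class assertion above reduces to one API property: when every container's requests equal its limits for both cpu and memory, the pod is assigned the Guaranteed QoS class. A minimal sketch with illustrative names and resource values (the suite generates its own):

$ cat <<'EOF' | kubectl apply -n pods-9584 -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo            # illustrative name
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits:   {cpu: 100m, memory: 100Mi}
EOF
$ kubectl get pod qos-demo -n pods-9584 -o jsonpath='{.status.qosClass}'   # expect Guaranteed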
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":228,"skipped":3891,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:52.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-75ea523f-88d7-46d4-9d8e-3d27f4e320ea STEP: Creating a pod to test consume secrets May 11 01:03:52.243: INFO: Waiting up to 5m0s for pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2" in namespace "secrets-9049" to be "Succeeded or Failed" May 11 01:03:52.245: INFO: Pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.560046ms May 11 01:03:54.412: INFO: Pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168735946s May 11 01:03:56.417: INFO: Pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173950883s May 11 01:03:58.435: INFO: Pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192329876s STEP: Saw pod success May 11 01:03:58.435: INFO: Pod "pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2" satisfied condition "Succeeded or Failed" May 11 01:03:58.438: INFO: Trying to get logs from node latest-worker pod pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2 container secret-volume-test: STEP: delete the pod May 11 01:03:58.495: INFO: Waiting for pod pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2 to disappear May 11 01:03:58.505: INFO: Pod pod-secrets-754dfeb8-d1c2-4995-a59e-dc3d429960e2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:03:58.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9049" for this suite. 
• [SLOW TEST:6.448 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3896,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:03:58.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 01:04:00.016: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 01:04:02.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755840, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755840, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755840, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724755839, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 01:04:06.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 11 01:04:10.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-438 to-be-attached-pod -i -c=container1' May 11 01:04:10.846: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:04:10.852: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "webhook-438" for this suite. STEP: Destroying namespace "webhook-438-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.426 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":230,"skipped":3912,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:04:10.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9557 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 11 01:04:11.081: INFO: Found 0 stateful pods, waiting for 3 May 11 01:04:21.087: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 01:04:21.087: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 01:04:21.087: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 01:04:31.087: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 01:04:31.087: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 01:04:31.087: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 01:04:31.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9557 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 01:04:31.356: INFO: stderr: "I0511 01:04:31.230633 2412 log.go:172] (0xc000a31a20) (0xc000860e60) Create stream\nI0511 01:04:31.230691 2412 log.go:172] (0xc000a31a20) (0xc000860e60) Stream added, broadcasting: 1\nI0511 01:04:31.235182 2412 log.go:172] (0xc000a31a20) Reply frame received for 1\nI0511 01:04:31.235218 2412 log.go:172] (0xc000a31a20) (0xc000805ae0) Create stream\nI0511 01:04:31.235227 2412 log.go:172] 
(0xc000a31a20) (0xc000805ae0) Stream added, broadcasting: 3\nI0511 01:04:31.236232 2412 log.go:172] (0xc000a31a20) Reply frame received for 3\nI0511 01:04:31.236273 2412 log.go:172] (0xc000a31a20) (0xc00070cbe0) Create stream\nI0511 01:04:31.236284 2412 log.go:172] (0xc000a31a20) (0xc00070cbe0) Stream added, broadcasting: 5\nI0511 01:04:31.237549 2412 log.go:172] (0xc000a31a20) Reply frame received for 5\nI0511 01:04:31.320215 2412 log.go:172] (0xc000a31a20) Data frame received for 5\nI0511 01:04:31.320243 2412 log.go:172] (0xc00070cbe0) (5) Data frame handling\nI0511 01:04:31.320265 2412 log.go:172] (0xc00070cbe0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 01:04:31.348042 2412 log.go:172] (0xc000a31a20) Data frame received for 3\nI0511 01:04:31.348098 2412 log.go:172] (0xc000805ae0) (3) Data frame handling\nI0511 01:04:31.348115 2412 log.go:172] (0xc000805ae0) (3) Data frame sent\nI0511 01:04:31.348127 2412 log.go:172] (0xc000a31a20) Data frame received for 3\nI0511 01:04:31.348137 2412 log.go:172] (0xc000805ae0) (3) Data frame handling\nI0511 01:04:31.348183 2412 log.go:172] (0xc000a31a20) Data frame received for 5\nI0511 01:04:31.348214 2412 log.go:172] (0xc00070cbe0) (5) Data frame handling\nI0511 01:04:31.350245 2412 log.go:172] (0xc000a31a20) Data frame received for 1\nI0511 01:04:31.350283 2412 log.go:172] (0xc000860e60) (1) Data frame handling\nI0511 01:04:31.350306 2412 log.go:172] (0xc000860e60) (1) Data frame sent\nI0511 01:04:31.350331 2412 log.go:172] (0xc000a31a20) (0xc000860e60) Stream removed, broadcasting: 1\nI0511 01:04:31.350629 2412 log.go:172] (0xc000a31a20) Go away received\nI0511 01:04:31.350811 2412 log.go:172] (0xc000a31a20) (0xc000860e60) Stream removed, broadcasting: 1\nI0511 01:04:31.350854 2412 log.go:172] (0xc000a31a20) (0xc000805ae0) Stream removed, broadcasting: 3\nI0511 01:04:31.350879 2412 log.go:172] (0xc000a31a20) (0xc00070cbe0) Stream removed, broadcasting: 5\n" May 11 01:04:31.356: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 01:04:31.356: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 01:04:41.387: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 01:04:51.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9557 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 01:04:51.673: INFO: stderr: "I0511 01:04:51.576748 2432 log.go:172] (0xc0007889a0) (0xc000431220) Create stream\nI0511 01:04:51.576815 2432 log.go:172] (0xc0007889a0) (0xc000431220) Stream added, broadcasting: 1\nI0511 01:04:51.579809 2432 log.go:172] (0xc0007889a0) Reply frame received for 1\nI0511 01:04:51.579871 2432 log.go:172] (0xc0007889a0) (0xc00002a0a0) Create stream\nI0511 01:04:51.579890 2432 log.go:172] (0xc0007889a0) (0xc00002a0a0) Stream added, broadcasting: 3\nI0511 01:04:51.581012 2432 log.go:172] (0xc0007889a0) Reply frame received for 3\nI0511 01:04:51.581058 2432 log.go:172] (0xc0007889a0) (0xc0003381e0) Create stream\nI0511 01:04:51.581078 2432 log.go:172] (0xc0007889a0) (0xc0003381e0) Stream added, broadcasting: 5\nI0511 01:04:51.582155 2432 log.go:172] (0xc0007889a0) Reply frame 
received for 5\nI0511 01:04:51.665889 2432 log.go:172] (0xc0007889a0) Data frame received for 3\nI0511 01:04:51.665938 2432 log.go:172] (0xc00002a0a0) (3) Data frame handling\nI0511 01:04:51.665975 2432 log.go:172] (0xc00002a0a0) (3) Data frame sent\nI0511 01:04:51.666166 2432 log.go:172] (0xc0007889a0) Data frame received for 5\nI0511 01:04:51.666206 2432 log.go:172] (0xc0003381e0) (5) Data frame handling\nI0511 01:04:51.666222 2432 log.go:172] (0xc0003381e0) (5) Data frame sent\nI0511 01:04:51.666234 2432 log.go:172] (0xc0007889a0) Data frame received for 5\nI0511 01:04:51.666245 2432 log.go:172] (0xc0003381e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 01:04:51.666275 2432 log.go:172] (0xc0007889a0) Data frame received for 3\nI0511 01:04:51.666292 2432 log.go:172] (0xc00002a0a0) (3) Data frame handling\nI0511 01:04:51.668042 2432 log.go:172] (0xc0007889a0) Data frame received for 1\nI0511 01:04:51.668073 2432 log.go:172] (0xc000431220) (1) Data frame handling\nI0511 01:04:51.668096 2432 log.go:172] (0xc000431220) (1) Data frame sent\nI0511 01:04:51.668126 2432 log.go:172] (0xc0007889a0) (0xc000431220) Stream removed, broadcasting: 1\nI0511 01:04:51.668151 2432 log.go:172] (0xc0007889a0) Go away received\nI0511 01:04:51.668644 2432 log.go:172] (0xc0007889a0) (0xc000431220) Stream removed, broadcasting: 1\nI0511 01:04:51.668670 2432 log.go:172] (0xc0007889a0) (0xc00002a0a0) Stream removed, broadcasting: 3\nI0511 01:04:51.668686 2432 log.go:172] (0xc0007889a0) (0xc0003381e0) Stream removed, broadcasting: 5\n" May 11 01:04:51.673: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 01:04:51.673: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 01:05:01.694: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update May 11 01:05:01.694: INFO: Waiting for Pod statefulset-9557/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 01:05:01.694: INFO: Waiting for Pod statefulset-9557/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 01:05:01.694: INFO: Waiting for Pod statefulset-9557/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 01:05:11.704: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update May 11 01:05:11.704: INFO: Waiting for Pod statefulset-9557/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 01:05:11.704: INFO: Waiting for Pod statefulset-9557/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 01:05:21.755: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update STEP: Rolling back to a previous revision May 11 01:05:31.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9557 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 01:05:31.986: INFO: stderr: "I0511 01:05:31.849381 2451 log.go:172] (0xc00003ac60) (0xc000524000) Create stream\nI0511 01:05:31.849440 2451 log.go:172] (0xc00003ac60) (0xc000524000) Stream added, broadcasting: 1\nI0511 01:05:31.852127 2451 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0511 01:05:31.852174 2451 log.go:172] (0xc00003ac60) (0xc00036c500) Create stream\nI0511 01:05:31.852183 2451 log.go:172] (0xc00003ac60) (0xc00036c500) Stream added, broadcasting: 3\nI0511 01:05:31.853335 2451 
log.go:172] (0xc00003ac60) Reply frame received for 3\nI0511 01:05:31.853520 2451 log.go:172] (0xc00003ac60) (0xc000524780) Create stream\nI0511 01:05:31.853535 2451 log.go:172] (0xc00003ac60) (0xc000524780) Stream added, broadcasting: 5\nI0511 01:05:31.854538 2451 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0511 01:05:31.943599 2451 log.go:172] (0xc00003ac60) Data frame received for 5\nI0511 01:05:31.943620 2451 log.go:172] (0xc000524780) (5) Data frame handling\nI0511 01:05:31.943631 2451 log.go:172] (0xc000524780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 01:05:31.978102 2451 log.go:172] (0xc00003ac60) Data frame received for 5\nI0511 01:05:31.978148 2451 log.go:172] (0xc000524780) (5) Data frame handling\nI0511 01:05:31.978192 2451 log.go:172] (0xc00003ac60) Data frame received for 3\nI0511 01:05:31.978227 2451 log.go:172] (0xc00036c500) (3) Data frame handling\nI0511 01:05:31.978246 2451 log.go:172] (0xc00036c500) (3) Data frame sent\nI0511 01:05:31.978254 2451 log.go:172] (0xc00003ac60) Data frame received for 3\nI0511 01:05:31.978259 2451 log.go:172] (0xc00036c500) (3) Data frame handling\nI0511 01:05:31.980180 2451 log.go:172] (0xc00003ac60) Data frame received for 1\nI0511 01:05:31.980198 2451 log.go:172] (0xc000524000) (1) Data frame handling\nI0511 01:05:31.980208 2451 log.go:172] (0xc000524000) (1) Data frame sent\nI0511 01:05:31.980220 2451 log.go:172] (0xc00003ac60) (0xc000524000) Stream removed, broadcasting: 1\nI0511 01:05:31.980282 2451 log.go:172] (0xc00003ac60) Go away received\nI0511 01:05:31.980496 2451 log.go:172] (0xc00003ac60) (0xc000524000) Stream removed, broadcasting: 1\nI0511 01:05:31.980520 2451 log.go:172] (0xc00003ac60) (0xc00036c500) Stream removed, broadcasting: 3\nI0511 01:05:31.980529 2451 log.go:172] (0xc00003ac60) (0xc000524780) Stream removed, broadcasting: 5\n" May 11 01:05:31.986: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 01:05:31.986: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 01:05:42.020: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 01:05:52.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9557 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 01:05:52.313: INFO: stderr: "I0511 01:05:52.201509 2472 log.go:172] (0xc000953340) (0xc000bbc6e0) Create stream\nI0511 01:05:52.201581 2472 log.go:172] (0xc000953340) (0xc000bbc6e0) Stream added, broadcasting: 1\nI0511 01:05:52.205758 2472 log.go:172] (0xc000953340) Reply frame received for 1\nI0511 01:05:52.205802 2472 log.go:172] (0xc000953340) (0xc0003c6dc0) Create stream\nI0511 01:05:52.205814 2472 log.go:172] (0xc000953340) (0xc0003c6dc0) Stream added, broadcasting: 3\nI0511 01:05:52.206616 2472 log.go:172] (0xc000953340) Reply frame received for 3\nI0511 01:05:52.206652 2472 log.go:172] (0xc000953340) (0xc0000f3040) Create stream\nI0511 01:05:52.206665 2472 log.go:172] (0xc000953340) (0xc0000f3040) Stream added, broadcasting: 5\nI0511 01:05:52.207731 2472 log.go:172] (0xc000953340) Reply frame received for 5\nI0511 01:05:52.306514 2472 log.go:172] (0xc000953340) Data frame received for 5\nI0511 01:05:52.306557 2472 log.go:172] (0xc0000f3040) (5) Data frame handling\nI0511 01:05:52.306589 2472 log.go:172] (0xc000953340) Data frame 
received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 01:05:52.306615 2472 log.go:172] (0xc0003c6dc0) (3) Data frame handling\nI0511 01:05:52.306630 2472 log.go:172] (0xc0003c6dc0) (3) Data frame sent\nI0511 01:05:52.306643 2472 log.go:172] (0xc000953340) Data frame received for 3\nI0511 01:05:52.306652 2472 log.go:172] (0xc0003c6dc0) (3) Data frame handling\nI0511 01:05:52.306666 2472 log.go:172] (0xc0000f3040) (5) Data frame sent\nI0511 01:05:52.306679 2472 log.go:172] (0xc000953340) Data frame received for 5\nI0511 01:05:52.306688 2472 log.go:172] (0xc0000f3040) (5) Data frame handling\nI0511 01:05:52.308380 2472 log.go:172] (0xc000953340) Data frame received for 1\nI0511 01:05:52.308406 2472 log.go:172] (0xc000bbc6e0) (1) Data frame handling\nI0511 01:05:52.308416 2472 log.go:172] (0xc000bbc6e0) (1) Data frame sent\nI0511 01:05:52.308431 2472 log.go:172] (0xc000953340) (0xc000bbc6e0) Stream removed, broadcasting: 1\nI0511 01:05:52.308445 2472 log.go:172] (0xc000953340) Go away received\nI0511 01:05:52.308935 2472 log.go:172] (0xc000953340) (0xc000bbc6e0) Stream removed, broadcasting: 1\nI0511 01:05:52.308966 2472 log.go:172] (0xc000953340) (0xc0003c6dc0) Stream removed, broadcasting: 3\nI0511 01:05:52.308987 2472 log.go:172] (0xc000953340) (0xc0000f3040) Stream removed, broadcasting: 5\n" May 11 01:05:52.313: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 01:05:52.313: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 01:06:02.334: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update May 11 01:06:02.334: INFO: Waiting for Pod statefulset-9557/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 01:06:02.334: INFO: Waiting for Pod statefulset-9557/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 01:06:02.334: INFO: Waiting for Pod statefulset-9557/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 01:06:12.342: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update May 11 01:06:12.342: INFO: Waiting for Pod statefulset-9557/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 01:06:22.343: INFO: Waiting for StatefulSet statefulset-9557/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 01:06:32.343: INFO: Deleting all statefulset in ns statefulset-9557 May 11 01:06:32.346: INFO: Scaling statefulset ss2 to 0 May 11 01:07:02.365: INFO: Waiting for statefulset status.replicas updated to 0 May 11 01:07:02.368: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:02.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9557" for this suite. 
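The image update and rollback driven through the API above map onto ordinary kubectl rollout operations; a hand-run approximation (the container name webserver is an assumption, since the log never prints the StatefulSet spec):

$ kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine -n statefulset-9557
$ kubectl rollout status statefulset/ss2 -n statefulset-9557
$ kubectl rollout history statefulset/ss2 -n statefulset-9557   # revisions are backed by the controllerrevisions ss2-65c7964b94 / ss2-84f9d6bf57 seen above
$ kubectl rollout undo statefulset/ss2 -n statefulset-9557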
• [SLOW TEST:171.450 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":231,"skipped":3933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:02.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:02.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2292" for this suite. 
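The table-transformation test asks the server for the meta.k8s.io Table rendering of a list; the 406 case covers a backend that cannot produce it. The same request can be issued by hand through kubectl proxy (URL and namespace are illustrative):

$ kubectl proxy --port=8001 &
$ curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
    http://127.0.0.1:8001/api/v1/namespaces/tables-2292/pods
# a backend that does not implement the Table conversion answers HTTP 406 Not Acceptable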
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":232,"skipped":3995,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:02.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:07:02.614: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 01:07:07.622: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 01:07:07.622: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 01:07:11.698: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7758 /apis/apps/v1/namespaces/deployment-7758/deployments/test-cleanup-deployment def2f80b-9e06-46c4-a405-09f2d34b4c2a 3230622 1 2020-05-11 01:07:07 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-05-11 01:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 01:07:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005ed1ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 01:07:07 +0000 UTC,LastTransitionTime:2020-05-11 01:07:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-6688745694" has successfully progressed.,LastUpdateTime:2020-05-11 01:07:11 +0000 UTC,LastTransitionTime:2020-05-11 01:07:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 01:07:11.702: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-7758 /apis/apps/v1/namespaces/deployment-7758/replicasets/test-cleanup-deployment-6688745694 67217910-e952-475c-86ba-23dadc9ca793 3230611 1 2020-05-11 01:07:07 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment def2f80b-9e06-46c4-a405-09f2d34b4c2a 0xc005ed1f27 0xc005ed1f28}] [] [{kube-controller-manager Update apps/v1 2020-05-11 01:07:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"def2f80b-9e06-46c4-a405-09f2d34b4c2a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod
pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005ed1fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 01:07:11.706: INFO: Pod "test-cleanup-deployment-6688745694-4r98k" is available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-4r98k test-cleanup-deployment-6688745694- deployment-7758 /api/v1/namespaces/deployment-7758/pods/test-cleanup-deployment-6688745694-4r98k 29719131-8951-4424-a7e1-76a1dd4e49ba 3230610 0 2020-05-11 01:07:07 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 67217910-e952-475c-86ba-23dadc9ca793 0xc00371d977 0xc00371d978}] [] [{kube-controller-manager Update v1 2020-05-11 01:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67217910-e952-475c-86ba-23dadc9ca793\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:07:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tgd9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tgd9t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tgd9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:07:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-11 01:07:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:07:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:07:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.158,StartTime:2020-05-11 01:07:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:07:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://d2ef7df30e344e3bfe5118ab860d032aee83d1a145473f46d0e6e8015609618f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:11.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7758" for this suite. • [SLOW TEST:9.203 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":233,"skipped":3997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:11.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:07:12.048: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2c7be7e4-b34e-4b55-9489-aef1e8870824", Controller:(*bool)(0xc00315f53a), BlockOwnerDeletion:(*bool)(0xc00315f53b)}} May 11 01:07:12.075: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8cf57efb-d577-42b4-89ca-7ed8861092ac", Controller:(*bool)(0xc0024f1c72), BlockOwnerDeletion:(*bool)(0xc0024f1c73)}} May 11 01:07:12.127: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0338be5d-db50-4a74-a23d-1ce6d55511d3", Controller:(*bool)(0xc00360ed62), 
BlockOwnerDeletion:(*bool)(0xc00360ed63)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:17.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1820" for this suite. • [SLOW TEST:5.523 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":234,"skipped":4046,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:17.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 11 01:07:17.393: INFO: Waiting up to 5m0s for pod "var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c" in namespace "var-expansion-7252" to be "Succeeded or Failed" May 11 01:07:17.585: INFO: Pod "var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 191.578268ms May 11 01:07:19.589: INFO: Pod "var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196106057s May 11 01:07:21.604: INFO: Pod "var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.210969294s STEP: Saw pod success May 11 01:07:21.604: INFO: Pod "var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c" satisfied condition "Succeeded or Failed" May 11 01:07:21.607: INFO: Trying to get logs from node latest-worker2 pod var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c container dapi-container: STEP: delete the pod May 11 01:07:21.651: INFO: Waiting for pod var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c to disappear May 11 01:07:21.660: INFO: Pod var-expansion-339bd019-d02c-4620-a84c-d7d0c631cb0c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:21.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7252" for this suite. 
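The substitution under test is subPathExpr: the kubelet expands $(VAR) references from the container's environment when mounting the volume, so each pod can land in its own subdirectory. A minimal sketch (names illustrative; the suite uses its own test image):

$ cat <<'EOF' | kubectl apply -n var-expansion-7252 -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /logscontainer"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logscontainer
      subPathExpr: $(POD_NAME)   # mounts the emptyDir subdirectory named after the pod
  volumes:
  - name: workdir
    emptyDir: {}
EOF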
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":235,"skipped":4046,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:21.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 01:07:21.784: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1265 /api/v1/namespaces/watch-1265/configmaps/e2e-watch-test-resource-version 877d2a1c-a115-4d4c-9b1c-f3a8494f5058 3230734 0 2020-05-11 01:07:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-11 01:07:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 01:07:21.784: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1265 /api/v1/namespaces/watch-1265/configmaps/e2e-watch-test-resource-version 877d2a1c-a115-4d4c-9b1c-f3a8494f5058 3230735 0 2020-05-11 01:07:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-11 01:07:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:21.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1265" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":236,"skipped":4051,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:21.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 01:07:22.470: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 01:07:26.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 01:07:28.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756042, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 01:07:31.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:07:31.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3034-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:32.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2507" for this suite. STEP: Destroying namespace "webhook-2507-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.581 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":237,"skipped":4052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:32.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 01:07:32.571: INFO: Waiting up to 5m0s for pod "pod-9f6fa648-a35d-4538-8285-46eaa59c2f85" in namespace "emptydir-380" to be "Succeeded or Failed" May 11 01:07:32.594: INFO: Pod "pod-9f6fa648-a35d-4538-8285-46eaa59c2f85": Phase="Pending", Reason="", readiness=false. Elapsed: 23.07536ms May 11 01:07:34.610: INFO: Pod "pod-9f6fa648-a35d-4538-8285-46eaa59c2f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038785605s May 11 01:07:36.614: INFO: Pod "pod-9f6fa648-a35d-4538-8285-46eaa59c2f85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042914275s STEP: Saw pod success May 11 01:07:36.614: INFO: Pod "pod-9f6fa648-a35d-4538-8285-46eaa59c2f85" satisfied condition "Succeeded or Failed" May 11 01:07:36.618: INFO: Trying to get logs from node latest-worker pod pod-9f6fa648-a35d-4538-8285-46eaa59c2f85 container test-container: STEP: delete the pod May 11 01:07:36.663: INFO: Waiting for pod pod-9f6fa648-a35d-4538-8285-46eaa59c2f85 to disappear May 11 01:07:36.742: INFO: Pod pod-9f6fa648-a35d-4538-8285-46eaa59c2f85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:36.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-380" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":4079,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:36.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ab75be36-fea3-4e3b-8661-07c821f87203 STEP: Creating a pod to test consume secrets May 11 01:07:36.808: INFO: Waiting up to 5m0s for pod "pod-secrets-5193d9ce-4247-4004-86bb-99234138f553" in namespace "secrets-3744" to be "Succeeded or Failed" May 11 01:07:36.824: INFO: Pod "pod-secrets-5193d9ce-4247-4004-86bb-99234138f553": Phase="Pending", Reason="", readiness=false. Elapsed: 16.002384ms May 11 01:07:38.898: INFO: Pod "pod-secrets-5193d9ce-4247-4004-86bb-99234138f553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090298826s May 11 01:07:40.902: INFO: Pod "pod-secrets-5193d9ce-4247-4004-86bb-99234138f553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09450824s STEP: Saw pod success May 11 01:07:40.902: INFO: Pod "pod-secrets-5193d9ce-4247-4004-86bb-99234138f553" satisfied condition "Succeeded or Failed" May 11 01:07:40.905: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5193d9ce-4247-4004-86bb-99234138f553 container secret-volume-test: STEP: delete the pod May 11 01:07:40.935: INFO: Waiting for pod pod-secrets-5193d9ce-4247-4004-86bb-99234138f553 to disappear May 11 01:07:40.956: INFO: Pod pod-secrets-5193d9ce-4247-4004-86bb-99234138f553 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:40.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3744" for this suite. 
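The Secrets "volume with mappings" spec above relies on the Items field of SecretVolumeSource: each KeyToPath entry remaps a secret key to a chosen path (and optional file mode) inside the mount, which is the "mappings" in the spec name. A sketch of the relevant pod spec; the key, path, mode, and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // hypothetical per-file mode
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox", // the e2e suite uses an agnhost image; busybox stands in here
				Args:  []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-example",
						// The "mapping": secret key data-1 surfaces at a different
						// relative path inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.Secret)
}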
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":4082,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:40.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 01:07:41.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4797' May 11 01:07:41.184: INFO: stderr: "" May 11 01:07:41.184: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 11 01:07:41.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4797' May 11 01:07:54.849: INFO: stderr: "" May 11 01:07:54.849: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:07:54.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4797" for this suite. 
• [SLOW TEST:13.880 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":240,"skipped":4084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:07:54.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 01:08:03.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 01:08:03.083: INFO: Pod pod-with-prestop-exec-hook still exists May 11 01:08:05.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 01:08:05.088: INFO: Pod pod-with-prestop-exec-hook still exists May 11 01:08:07.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 01:08:07.087: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:08:07.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4661" for this suite. 
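The prestop spec above creates a pod whose container carries a PreStop exec hook, deletes the pod, and then checks (via a second "handle" pod) that the hook actually ran before termination. A minimal sketch of the hook wiring, assuming the v1.18-era API, where the handler type is corev1.Handler (renamed LifecycleHandler in later releases); the command and handler hostname are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-exec-hook",
		Image: "busybox", // stand-in image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// Runs inside the container before termination; the e2e test's
					// hook reports back to the handler pod created in BeforeEach.
					Command: []string{"sh", "-c", "wget -qO- http://hook-handler:8080/echo?msg=prestop"},
				},
			},
		},
	}
	fmt.Printf("%+v\n", c.Lifecycle.PreStop.Exec.Command)
}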
• [SLOW TEST:12.223 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":4118,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:08:07.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 01:08:07.856: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 01:08:09.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756087, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756087, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756088, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724756087, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 01:08:12.885: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the 
create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:08:13.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4257" for this suite. STEP: Destroying namespace "webhook-4257-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.124 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":242,"skipped":4123,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:08:13.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 01:08:13.312: INFO: Waiting up to 5m0s for pod "downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52" in namespace "downward-api-9946" to be "Succeeded or Failed" May 11 01:08:13.348: INFO: Pod "downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52": Phase="Pending", Reason="", readiness=false. Elapsed: 35.951919ms May 11 01:08:15.365: INFO: Pod "downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052566405s May 11 01:08:17.369: INFO: Pod "downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056542436s STEP: Saw pod success May 11 01:08:17.369: INFO: Pod "downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52" satisfied condition "Succeeded or Failed" May 11 01:08:17.372: INFO: Trying to get logs from node latest-worker pod downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52 container dapi-container: STEP: delete the pod May 11 01:08:17.412: INFO: Waiting for pod downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52 to disappear May 11 01:08:17.419: INFO: Pod downward-api-7d6726f5-8b90-4306-8039-db5d6ae61c52 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:08:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9946" for this suite. 
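The Downward API spec above sets no resource limits on its container, so resourceFieldRef lookups for limits.cpu and limits.memory fall back to the node's allocatable values, which is exactly what the test asserts. A sketch of the env wiring (variable names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With no resources declared on the container, these resolve to
	// the node's allocatable CPU and memory.
	envs := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
	fmt.Printf("%+v\n", envs)
}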
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":4127,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:08:17.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1048 STEP: creating service affinity-clusterip in namespace services-1048 STEP: creating replication controller affinity-clusterip in namespace services-1048 I0511 01:08:17.526921 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1048, replica count: 3 I0511 01:08:20.577620 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 01:08:23.577859 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 01:08:23.585: INFO: Creating new exec pod May 11 01:08:28.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1048 execpod-affinity6tqsw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 11 01:08:28.863: INFO: stderr: "I0511 01:08:28.776758 2532 log.go:172] (0xc00097f290) (0xc000b2a460) Create stream\nI0511 01:08:28.776814 2532 log.go:172] (0xc00097f290) (0xc000b2a460) Stream added, broadcasting: 1\nI0511 01:08:28.782215 2532 log.go:172] (0xc00097f290) Reply frame received for 1\nI0511 01:08:28.782260 2532 log.go:172] (0xc00097f290) (0xc0008540a0) Create stream\nI0511 01:08:28.782275 2532 log.go:172] (0xc00097f290) (0xc0008540a0) Stream added, broadcasting: 3\nI0511 01:08:28.783460 2532 log.go:172] (0xc00097f290) Reply frame received for 3\nI0511 01:08:28.783520 2532 log.go:172] (0xc00097f290) (0xc00081a780) Create stream\nI0511 01:08:28.783602 2532 log.go:172] (0xc00097f290) (0xc00081a780) Stream added, broadcasting: 5\nI0511 01:08:28.784539 2532 log.go:172] (0xc00097f290) Reply frame received for 5\nI0511 01:08:28.853787 2532 log.go:172] (0xc00097f290) Data frame received for 5\nI0511 01:08:28.853823 2532 log.go:172] (0xc00081a780) (5) Data frame handling\nI0511 01:08:28.853861 2532 log.go:172] (0xc00081a780) (5) Data frame sent\nI0511 01:08:28.853895 2532 log.go:172] (0xc00097f290) Data frame received for 5\nI0511 01:08:28.853917 2532 log.go:172] (0xc00081a780) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0511 01:08:28.853939 2532 log.go:172] (0xc00097f290) Data frame received for 3\nI0511 01:08:28.853964 
2532 log.go:172] (0xc0008540a0) (3) Data frame handling\nI0511 01:08:28.854007 2532 log.go:172] (0xc00081a780) (5) Data frame sent\nI0511 01:08:28.854080 2532 log.go:172] (0xc00097f290) Data frame received for 5\nI0511 01:08:28.854092 2532 log.go:172] (0xc00081a780) (5) Data frame handling\nI0511 01:08:28.856218 2532 log.go:172] (0xc00097f290) Data frame received for 1\nI0511 01:08:28.856244 2532 log.go:172] (0xc000b2a460) (1) Data frame handling\nI0511 01:08:28.856260 2532 log.go:172] (0xc000b2a460) (1) Data frame sent\nI0511 01:08:28.856277 2532 log.go:172] (0xc00097f290) (0xc000b2a460) Stream removed, broadcasting: 1\nI0511 01:08:28.856298 2532 log.go:172] (0xc00097f290) Go away received\nI0511 01:08:28.856779 2532 log.go:172] (0xc00097f290) (0xc000b2a460) Stream removed, broadcasting: 1\nI0511 01:08:28.856812 2532 log.go:172] (0xc00097f290) (0xc0008540a0) Stream removed, broadcasting: 3\nI0511 01:08:28.856827 2532 log.go:172] (0xc00097f290) (0xc00081a780) Stream removed, broadcasting: 5\n" May 11 01:08:28.863: INFO: stdout: "" May 11 01:08:28.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1048 execpod-affinity6tqsw -- /bin/sh -x -c nc -zv -t -w 2 10.102.201.147 80' May 11 01:08:29.051: INFO: stderr: "I0511 01:08:28.995985 2554 log.go:172] (0xc0009808f0) (0xc00056cfa0) Create stream\nI0511 01:08:28.996041 2554 log.go:172] (0xc0009808f0) (0xc00056cfa0) Stream added, broadcasting: 1\nI0511 01:08:28.998446 2554 log.go:172] (0xc0009808f0) Reply frame received for 1\nI0511 01:08:28.998486 2554 log.go:172] (0xc0009808f0) (0xc000430280) Create stream\nI0511 01:08:28.998493 2554 log.go:172] (0xc0009808f0) (0xc000430280) Stream added, broadcasting: 3\nI0511 01:08:28.999266 2554 log.go:172] (0xc0009808f0) Reply frame received for 3\nI0511 01:08:28.999303 2554 log.go:172] (0xc0009808f0) (0xc0001b8dc0) Create stream\nI0511 01:08:28.999321 2554 log.go:172] (0xc0009808f0) (0xc0001b8dc0) Stream added, broadcasting: 5\nI0511 01:08:28.999983 2554 log.go:172] (0xc0009808f0) Reply frame received for 5\nI0511 01:08:29.044335 2554 log.go:172] (0xc0009808f0) Data frame received for 3\nI0511 01:08:29.044382 2554 log.go:172] (0xc000430280) (3) Data frame handling\nI0511 01:08:29.044478 2554 log.go:172] (0xc0009808f0) Data frame received for 5\nI0511 01:08:29.044501 2554 log.go:172] (0xc0001b8dc0) (5) Data frame handling\nI0511 01:08:29.044517 2554 log.go:172] (0xc0001b8dc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.102.201.147 80\nConnection to 10.102.201.147 80 port [tcp/http] succeeded!\nI0511 01:08:29.044525 2554 log.go:172] (0xc0009808f0) Data frame received for 5\nI0511 01:08:29.044544 2554 log.go:172] (0xc0001b8dc0) (5) Data frame handling\nI0511 01:08:29.045998 2554 log.go:172] (0xc0009808f0) Data frame received for 1\nI0511 01:08:29.046035 2554 log.go:172] (0xc00056cfa0) (1) Data frame handling\nI0511 01:08:29.046049 2554 log.go:172] (0xc00056cfa0) (1) Data frame sent\nI0511 01:08:29.046061 2554 log.go:172] (0xc0009808f0) (0xc00056cfa0) Stream removed, broadcasting: 1\nI0511 01:08:29.046087 2554 log.go:172] (0xc0009808f0) Go away received\nI0511 01:08:29.046424 2554 log.go:172] (0xc0009808f0) (0xc00056cfa0) Stream removed, broadcasting: 1\nI0511 01:08:29.046445 2554 log.go:172] (0xc0009808f0) (0xc000430280) Stream removed, broadcasting: 3\nI0511 01:08:29.046467 2554 log.go:172] (0xc0009808f0) (0xc0001b8dc0) Stream removed, broadcasting: 5\n" May 11 01:08:29.052: INFO: stdout: "" May 11 01:08:29.052: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1048 execpod-affinity6tqsw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.201.147:80/ ; done' May 11 01:08:29.359: INFO: stderr: "I0511 01:08:29.190968 2575 log.go:172] (0xc0005b7b80) (0xc0005421e0) Create stream\nI0511 01:08:29.191025 2575 log.go:172] (0xc0005b7b80) (0xc0005421e0) Stream added, broadcasting: 1\nI0511 01:08:29.197347 2575 log.go:172] (0xc0005b7b80) Reply frame received for 1\nI0511 01:08:29.197417 2575 log.go:172] (0xc0005b7b80) (0xc0006b45a0) Create stream\nI0511 01:08:29.197438 2575 log.go:172] (0xc0005b7b80) (0xc0006b45a0) Stream added, broadcasting: 3\nI0511 01:08:29.201370 2575 log.go:172] (0xc0005b7b80) Reply frame received for 3\nI0511 01:08:29.201444 2575 log.go:172] (0xc0005b7b80) (0xc000542d20) Create stream\nI0511 01:08:29.201474 2575 log.go:172] (0xc0005b7b80) (0xc000542d20) Stream added, broadcasting: 5\nI0511 01:08:29.202366 2575 log.go:172] (0xc0005b7b80) Reply frame received for 5\nI0511 01:08:29.270501 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.270546 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.270574 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.270596 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.270605 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.270625 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.276314 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.276333 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.276351 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.277310 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.277338 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.277354 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.277374 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.277388 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.277413 2575 log.go:172] (0xc000542d20) (5) Data frame sent\nI0511 01:08:29.277427 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.277438 2575 log.go:172] (0xc000542d20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.277467 2575 log.go:172] (0xc000542d20) (5) Data frame sent\nI0511 01:08:29.282655 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.282681 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.282720 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.283434 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.283466 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.283485 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ I0511 01:08:29.283532 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.283557 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.283569 2575 log.go:172] (0xc000542d20) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.283703 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.283730 2575 log.go:172] 
(0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.283760 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.291350 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.291390 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.291419 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.291709 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.291740 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.291754 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.291780 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.291798 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.291818 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.295353 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.295374 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.295390 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.295721 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.295745 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.295783 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.295797 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.295814 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.295832 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.299218 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.299249 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.299263 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.299533 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.299552 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.299564 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.299579 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.299633 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.299659 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.303830 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.303852 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.303876 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.304512 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.304549 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.304565 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.304584 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.304595 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.304614 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.308497 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.308526 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.308547 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.309652 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.309673 2575 
log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.309681 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.309692 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.309698 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.309707 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.314186 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.314217 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.314238 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.314524 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.314560 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.314576 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.314599 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.314618 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.314674 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.319582 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.319690 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.319785 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.319817 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.319832 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.319842 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.325444 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.325470 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.325490 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.325554 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.325586 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.325599 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.325610 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.325618 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.325639 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.328962 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.328983 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.329004 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.329850 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.329869 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.329878 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.329887 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.329892 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.329897 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.333604 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.333621 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.333647 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.334129 2575 
log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.334149 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.334157 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.334174 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.334199 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.334214 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.337548 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.337569 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.337601 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.338009 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.338022 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.338028 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.338059 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.338087 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.338107 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.342064 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.342100 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.342118 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.342645 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.342688 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.342711 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.342752 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.342770 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.342801 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.346694 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.346711 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.346727 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.347401 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.347443 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.347470 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.347498 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.347518 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.347544 2575 log.go:172] (0xc000542d20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.201.147:80/\nI0511 01:08:29.351255 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.351290 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.351312 2575 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0511 01:08:29.351841 2575 log.go:172] (0xc0005b7b80) Data frame received for 5\nI0511 01:08:29.351876 2575 log.go:172] (0xc000542d20) (5) Data frame handling\nI0511 01:08:29.351896 2575 log.go:172] (0xc0005b7b80) Data frame received for 3\nI0511 01:08:29.351909 2575 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0511 01:08:29.354007 2575 log.go:172] (0xc0005b7b80) Data frame received for 1\nI0511 01:08:29.354043 2575 log.go:172] (0xc0005421e0) (1) Data frame handling\nI0511 
01:08:29.354072 2575 log.go:172] (0xc0005421e0) (1) Data frame sent\nI0511 01:08:29.354099 2575 log.go:172] (0xc0005b7b80) (0xc0005421e0) Stream removed, broadcasting: 1\nI0511 01:08:29.354140 2575 log.go:172] (0xc0005b7b80) Go away received\nI0511 01:08:29.354584 2575 log.go:172] (0xc0005b7b80) (0xc0005421e0) Stream removed, broadcasting: 1\nI0511 01:08:29.354614 2575 log.go:172] (0xc0005b7b80) (0xc0006b45a0) Stream removed, broadcasting: 3\nI0511 01:08:29.354636 2575 log.go:172] (0xc0005b7b80) (0xc000542d20) Stream removed, broadcasting: 5\n" May 11 01:08:29.360: INFO: stdout: "\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z\naffinity-clusterip-xh47z" May 11 01:08:29.360: INFO: Received response from host: May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Received response from host: affinity-clusterip-xh47z May 11 01:08:29.360: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1048, will wait for the garbage collector to delete the pods May 11 01:08:29.655: INFO: Deleting ReplicationController affinity-clusterip took: 174.230659ms May 11 01:08:30.055: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.246259ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:08:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1048" for this suite. 
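The sixteen identical affinity-clusterip-xh47z responses above are the point of this spec: with SessionAffinity set to ClientIP, kube-proxy pins each source IP to a single backend pod, so every curl from the exec pod lands on the same replica. A sketch of the Service spec involved (the selector labels are an assumption):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"}, // assumed label
			Ports:    []corev1.ServicePort{{Port: 80}},
			// ClientIP affinity pins a given source IP to one backend, which is
			// why every request in the log hit affinity-clusterip-xh47z.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	fmt.Printf("%+v\n", svc.Spec.SessionAffinity)
}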
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.928 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":244,"skipped":4133,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:08:45.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-1b2907fb-2a6d-46fe-bd7d-93f31d7eaea2 STEP: Creating a pod to test consume configMaps May 11 01:08:45.485: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b" in namespace "projected-9994" to be "Succeeded or Failed" May 11 01:08:45.519: INFO: Pod "pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.964868ms May 11 01:08:47.523: INFO: Pod "pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038005044s May 11 01:08:49.539: INFO: Pod "pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053969474s STEP: Saw pod success May 11 01:08:49.539: INFO: Pod "pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b" satisfied condition "Succeeded or Failed" May 11 01:08:49.542: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b container projected-configmap-volume-test: STEP: delete the pod May 11 01:08:49.575: INFO: Waiting for pod pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b to disappear May 11 01:08:49.583: INFO: Pod pod-projected-configmaps-47ae2e6e-b07c-4150-b682-627f09ef911b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:08:49.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9994" for this suite. 
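As with the secret-volume variant earlier, the "mappings and Item mode" in the projected configMap spec above come from KeyToPath entries, this time inside a projected volume's ConfigMapProjection. A sketch, with the key, path, and mode as illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item mode, the "[LinuxOnly]" part of the spec name
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map-example", // illustrative name
						},
						// Mapping: key data-2 appears at path/to/data-2 with mode 0400.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected.Sources[0].ConfigMap)
}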
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":4143,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:08:49.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:08:49.679: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 01:08:52.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1265 create -f -' May 11 01:08:58.318: INFO: stderr: "" May 11 01:08:58.318: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 01:08:58.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1265 delete e2e-test-crd-publish-openapi-6199-crds test-cr' May 11 01:08:58.442: INFO: stderr: "" May 11 01:08:58.442: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 11 01:08:58.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1265 apply -f -' May 11 01:08:58.711: INFO: stderr: "" May 11 01:08:58.711: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 01:08:58.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1265 delete e2e-test-crd-publish-openapi-6199-crds test-cr' May 11 01:08:58.867: INFO: stderr: "" May 11 01:08:58.867: INFO: stdout: "e2e-test-crd-publish-openapi-6199-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 01:08:58.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6199-crds' May 11 01:08:59.181: INFO: stderr: "" May 11 01:08:59.181: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6199-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:01.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1265" for this suite. • [SLOW TEST:11.533 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":246,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:01.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 11 01:09:01.280: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 11 01:09:01.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:01.658: INFO: stderr: "" May 11 01:09:01.658: INFO: stdout: "service/agnhost-slave created\n" May 11 01:09:01.658: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 11 01:09:01.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:01.971: INFO: stderr: "" May 11 01:09:01.971: INFO: stdout: "service/agnhost-master created\n" May 11 
01:09:01.972: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 11 01:09:01.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:02.262: INFO: stderr: "" May 11 01:09:02.262: INFO: stdout: "service/frontend created\n" May 11 01:09:02.262: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 11 01:09:02.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:02.531: INFO: stderr: "" May 11 01:09:02.531: INFO: stdout: "deployment.apps/frontend created\n" May 11 01:09:02.531: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 11 01:09:02.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:02.911: INFO: stderr: "" May 11 01:09:02.911: INFO: stdout: "deployment.apps/agnhost-master created\n" May 11 01:09:02.911: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 11 01:09:02.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' May 11 01:09:03.268: INFO: stderr: "" May 11 01:09:03.268: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 11 01:09:03.268: INFO: Waiting for all frontend pods to be Running. May 11 01:09:13.319: INFO: Waiting for frontend to serve content. May 11 01:09:13.329: INFO: Trying to add a new entry to the guestbook. May 11 01:09:13.340: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 11 01:09:13.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:13.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:13.529: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 11 01:09:13.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:13.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:13.679: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 01:09:13.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:13.823: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:13.823: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 01:09:13.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:13.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:13.965: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 01:09:13.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:14.119: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:14.119: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 01:09:14.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' May 11 01:09:14.437: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 01:09:14.437: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:14.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1396" for this suite. 
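The manifests the guestbook spec pipes into kubectl create -f are logged above in flattened form; for comparison, the frontend Deployment expressed as a direct client-go call might look like the following sketch (resource requests omitted, error handling minimal):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path from this run
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(3)
	labels := map[string]string{"app": "guestbook", "tier": "frontend"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "frontend"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "guestbook-frontend",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
						Args:  []string{"guestbook", "--backend-port", "6379"},
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("kubectl-1396").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}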
• [SLOW TEST:13.589 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":247,"skipped":4167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:14.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-01e9401e-3afb-46ff-bce3-cda276fbaaa2 STEP: Creating a pod to test consume configMaps May 11 01:09:15.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991" in namespace "configmap-2242" to be "Succeeded or Failed" May 11 01:09:15.792: INFO: Pod "pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991": Phase="Pending", Reason="", readiness=false. Elapsed: 142.638104ms May 11 01:09:17.814: INFO: Pod "pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164863317s May 11 01:09:19.845: INFO: Pod "pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195317461s STEP: Saw pod success May 11 01:09:19.845: INFO: Pod "pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991" satisfied condition "Succeeded or Failed" May 11 01:09:19.848: INFO: Trying to get logs from node latest-worker pod pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991 container configmap-volume-test: STEP: delete the pod May 11 01:09:20.166: INFO: Waiting for pod pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991 to disappear May 11 01:09:20.177: INFO: Pod pod-configmaps-60e8f8b7-8766-48f8-ab98-db597db30991 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:20.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2242" for this suite. 
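The "volume with mappings" variant exercised above mounts a ConfigMap and remaps its keys onto file paths via the items field. A minimal sketch of such a pod; the names cm-demo, the key/path pair, and the busybox image are assumptions for illustration (the e2e test generates its own names and uses its own test image):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo                  # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo-pod              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox               # assumed image
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo
      items:                     # key -> path mapping is what "with mappings" means
      - key: data-1
        path: path/to/data-1
EOF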
• [SLOW TEST:5.475 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:20.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-cab2f052-540d-4c06-ba9c-f785dfb79f99 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9561" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":249,"skipped":4292,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:20.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-53328289-422b-4930-b6ae-b75106ddee8d STEP: Creating secret with name s-test-opt-upd-a9429f6d-b487-4224-8cd8-aa2d18703a08 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-53328289-422b-4930-b6ae-b75106ddee8d STEP: Updating secret s-test-opt-upd-a9429f6d-b487-4224-8cd8-aa2d18703a08 STEP: Creating secret with name s-test-opt-create-bc7344d3-4657-4532-9ba6-8fa5f895a085 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:28.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3927" for this suite. 
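The test above deletes one optional secret, updates a second, and creates a third while the pod is running, then waits for the projected volume to converge. A minimal sketch of a pod with optional secret sources in a projected volume; all names and the busybox image are illustrative assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo  # illustrative name
spec:
  containers:
  - name: test
    image: busybox               # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: s-opt-del        # may be deleted while the pod runs
          optional: true         # optional: a missing secret is tolerated
      - secret:
          name: s-opt-create     # may not exist yet at pod creation time
          optional: true
EOF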
• [SLOW TEST:8.478 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4311,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:28.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:09:28.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4645" for this suite. 
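The discovery documents the test walks through can be inspected directly with kubectl's raw API access; a minimal sketch, assuming a configured kubeconfig:

# Fetch the same discovery documents the test checks.
kubectl get --raw /apis                             # group list; should contain apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io        # group document listing its versions (v1)
kubectl get --raw /apis/apiextensions.k8s.io/v1     # resource list; should contain customresourcedefinitions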
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":251,"skipped":4314,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:09:28.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9579.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.140.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.140.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.140.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.140.160_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9579.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.140.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.140.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.140.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.140.160_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 01:09:35.269: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.296: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:35.324: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:09:40.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:40.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:40.354: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:40.356: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:40.376: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: 
[wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:09:45.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:45.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:45.359: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:45.361: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:45.396: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:09:50.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:50.335: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:50.362: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:50.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:50.390: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:09:55.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:55.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods 
dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:55.359: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:55.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:09:55.384: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:10:00.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:10:00.334: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:10:00.364: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:10:00.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1: the server could not find the requested resource (get pods dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1) May 11 01:10:00.393: INFO: Lookups using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local] May 11 01:10:05.393: INFO: DNS probes using dns-9579/dns-test-3ec7100a-0f6e-47df-a58e-21e2b61726e1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:10:06.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9579" for this suite. 
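The probe pods above loop dig queries against the service's A and SRV records and write OK markers on success; the transient "could not find the requested resource" errors are the framework polling the result files before the probes finish, and the run converges to "DNS probes ... succeeded". The same lookups can be issued by hand from any pod that has dig, using this run's names (namespace dns-9579):

# A record of the headless service (UDP), exactly as in the probe script:
dig +notcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A

# SRV record of the named port over TCP:
dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV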
• [SLOW TEST:37.194 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":252,"skipped":4333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:10:06.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-e6603b1b-d689-47c8-bfa7-7680b960d22e STEP: Creating a pod to test consume secrets May 11 01:10:06.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766" in namespace "projected-5730" to be "Succeeded or Failed" May 11 01:10:06.432: INFO: Pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766": Phase="Pending", Reason="", readiness=false. Elapsed: 9.367582ms May 11 01:10:10.139: INFO: Pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716341695s May 11 01:10:12.143: INFO: Pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766": Phase="Pending", Reason="", readiness=false. Elapsed: 5.720685681s May 11 01:10:14.147: INFO: Pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.725166407s STEP: Saw pod success May 11 01:10:14.147: INFO: Pod "pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766" satisfied condition "Succeeded or Failed" May 11 01:10:14.151: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766 container projected-secret-volume-test: STEP: delete the pod May 11 01:10:14.188: INFO: Waiting for pod pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766 to disappear May 11 01:10:14.200: INFO: Pod pod-projected-secrets-12b20f0b-c502-45ce-a4f8-cc2ad7dbe766 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:10:14.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5730" for this suite. 
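The variant above verifies file modes and ownership on a projected secret volume for a non-root pod. A minimal sketch of the relevant fields; the names, UID/GID values, mode, and busybox image are illustrative assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo      # illustrative name
spec:
  securityContext:
    runAsUser: 1000              # non-root, as in the [LinuxOnly] variant
    fsGroup: 2000                # group ownership applied to the volume
  containers:
  - name: test
    image: busybox               # assumed image
    command: ["ls", "-l", "/etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440          # file mode applied to the projected files
      sources:
      - secret:
          name: my-secret        # illustrative; the secret must already exist
EOF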
• [SLOW TEST:8.080 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4357,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:10:14.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1049 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 01:10:14.310: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 01:10:14.475: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 01:10:16.479: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 01:10:18.480: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:20.480: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:22.479: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:24.479: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:26.480: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:28.492: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:10:30.479: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 01:10:30.484: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 01:10:34.646: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.173:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1049 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:10:34.646: INFO: >>> kubeConfig: /root/.kube/config I0511 01:10:34.681643 7 log.go:172] (0xc002c851e0) (0xc002837e00) Create stream I0511 01:10:34.681675 7 log.go:172] (0xc002c851e0) (0xc002837e00) Stream added, broadcasting: 1 I0511 01:10:34.683569 7 log.go:172] (0xc002c851e0) Reply frame received for 1 I0511 01:10:34.683593 7 log.go:172] (0xc002c851e0) (0xc002526be0) Create stream I0511 01:10:34.683602 7 log.go:172] (0xc002c851e0) (0xc002526be0) Stream added, broadcasting: 3 I0511 01:10:34.684528 7 log.go:172] (0xc002c851e0) Reply frame received for 
3 I0511 01:10:34.684562 7 log.go:172] (0xc002c851e0) (0xc000fabc20) Create stream I0511 01:10:34.684571 7 log.go:172] (0xc002c851e0) (0xc000fabc20) Stream added, broadcasting: 5 I0511 01:10:34.685527 7 log.go:172] (0xc002c851e0) Reply frame received for 5 I0511 01:10:34.774184 7 log.go:172] (0xc002c851e0) Data frame received for 3 I0511 01:10:34.774218 7 log.go:172] (0xc002526be0) (3) Data frame handling I0511 01:10:34.774231 7 log.go:172] (0xc002526be0) (3) Data frame sent I0511 01:10:34.774239 7 log.go:172] (0xc002c851e0) Data frame received for 3 I0511 01:10:34.774255 7 log.go:172] (0xc002526be0) (3) Data frame handling I0511 01:10:34.774276 7 log.go:172] (0xc002c851e0) Data frame received for 5 I0511 01:10:34.774303 7 log.go:172] (0xc000fabc20) (5) Data frame handling I0511 01:10:34.776261 7 log.go:172] (0xc002c851e0) Data frame received for 1 I0511 01:10:34.776280 7 log.go:172] (0xc002837e00) (1) Data frame handling I0511 01:10:34.776295 7 log.go:172] (0xc002837e00) (1) Data frame sent I0511 01:10:34.776514 7 log.go:172] (0xc002c851e0) (0xc002837e00) Stream removed, broadcasting: 1 I0511 01:10:34.776611 7 log.go:172] (0xc002c851e0) Go away received I0511 01:10:34.776652 7 log.go:172] (0xc002c851e0) (0xc002837e00) Stream removed, broadcasting: 1 I0511 01:10:34.776686 7 log.go:172] (0xc002c851e0) (0xc002526be0) Stream removed, broadcasting: 3 I0511 01:10:34.776709 7 log.go:172] (0xc002c851e0) (0xc000fabc20) Stream removed, broadcasting: 5 May 11 01:10:34.776: INFO: Found all expected endpoints: [netserver-0] May 11 01:10:34.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.252:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1049 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:10:34.780: INFO: >>> kubeConfig: /root/.kube/config I0511 01:10:34.809383 7 log.go:172] (0xc002c85760) (0xc000c5c5a0) Create stream I0511 01:10:34.809421 7 log.go:172] (0xc002c85760) (0xc000c5c5a0) Stream added, broadcasting: 1 I0511 01:10:34.811601 7 log.go:172] (0xc002c85760) Reply frame received for 1 I0511 01:10:34.811641 7 log.go:172] (0xc002c85760) (0xc002aa8e60) Create stream I0511 01:10:34.811654 7 log.go:172] (0xc002c85760) (0xc002aa8e60) Stream added, broadcasting: 3 I0511 01:10:34.812491 7 log.go:172] (0xc002c85760) Reply frame received for 3 I0511 01:10:34.812528 7 log.go:172] (0xc002c85760) (0xc002aa8f00) Create stream I0511 01:10:34.812543 7 log.go:172] (0xc002c85760) (0xc002aa8f00) Stream added, broadcasting: 5 I0511 01:10:34.813770 7 log.go:172] (0xc002c85760) Reply frame received for 5 I0511 01:10:34.895212 7 log.go:172] (0xc002c85760) Data frame received for 3 I0511 01:10:34.895237 7 log.go:172] (0xc002aa8e60) (3) Data frame handling I0511 01:10:34.895260 7 log.go:172] (0xc002aa8e60) (3) Data frame sent I0511 01:10:34.895275 7 log.go:172] (0xc002c85760) Data frame received for 3 I0511 01:10:34.895288 7 log.go:172] (0xc002aa8e60) (3) Data frame handling I0511 01:10:34.895456 7 log.go:172] (0xc002c85760) Data frame received for 5 I0511 01:10:34.895473 7 log.go:172] (0xc002aa8f00) (5) Data frame handling I0511 01:10:34.896799 7 log.go:172] (0xc002c85760) Data frame received for 1 I0511 01:10:34.896865 7 log.go:172] (0xc000c5c5a0) (1) Data frame handling I0511 01:10:34.896940 7 log.go:172] (0xc000c5c5a0) (1) Data frame sent I0511 01:10:34.896975 7 log.go:172] (0xc002c85760) (0xc000c5c5a0) Stream removed, broadcasting: 1 I0511 01:10:34.897012 7 
log.go:172] (0xc002c85760) Go away received I0511 01:10:34.897312 7 log.go:172] (0xc002c85760) (0xc000c5c5a0) Stream removed, broadcasting: 1 I0511 01:10:34.897331 7 log.go:172] (0xc002c85760) (0xc002aa8e60) Stream removed, broadcasting: 3 I0511 01:10:34.897340 7 log.go:172] (0xc002c85760) (0xc002aa8f00) Stream removed, broadcasting: 5 May 11 01:10:34.897: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:10:34.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1049" for this suite. • [SLOW TEST:20.698 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4363,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:10:34.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 01:10:35.005: INFO: Waiting up to 5m0s for pod "pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad" in namespace "emptydir-3277" to be "Succeeded or Failed" May 11 01:10:35.024: INFO: Pod "pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad": Phase="Pending", Reason="", readiness=false. Elapsed: 19.441051ms May 11 01:10:37.029: INFO: Pod "pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023972845s May 11 01:10:39.033: INFO: Pod "pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028630485s STEP: Saw pod success May 11 01:10:39.033: INFO: Pod "pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad" satisfied condition "Succeeded or Failed" May 11 01:10:39.036: INFO: Trying to get logs from node latest-worker2 pod pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad container test-container: STEP: delete the pod May 11 01:10:39.059: INFO: Waiting for pod pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad to disappear May 11 01:10:39.078: INFO: Pod pod-c53174a5-90bb-4b2e-a9a7-bd40323a38ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:10:39.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3277" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4373,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:10:39.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 11 01:10:39.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3994' May 11 01:10:39.634: INFO: stderr: "" May 11 01:10:39.634: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 01:10:39.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:39.784: INFO: stderr: "" May 11 01:10:39.784: INFO: stdout: "update-demo-nautilus-hkr57 update-demo-nautilus-jjkpn " May 11 01:10:39.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:39.879: INFO: stderr: "" May 11 01:10:39.879: INFO: stdout: "" May 11 01:10:39.879: INFO: update-demo-nautilus-hkr57 is created but not running May 11 01:10:44.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:44.979: INFO: stderr: "" May 11 01:10:44.979: INFO: stdout: "update-demo-nautilus-hkr57 update-demo-nautilus-jjkpn " May 11 01:10:44.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:45.071: INFO: stderr: "" May 11 01:10:45.071: INFO: stdout: "true" May 11 01:10:45.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr57 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:45.195: INFO: stderr: "" May 11 01:10:45.195: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:10:45.195: INFO: validating pod update-demo-nautilus-hkr57 May 11 01:10:45.201: INFO: got data: { "image": "nautilus.jpg" } May 11 01:10:45.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:10:45.201: INFO: update-demo-nautilus-hkr57 is verified up and running May 11 01:10:45.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:45.304: INFO: stderr: "" May 11 01:10:45.304: INFO: stdout: "true" May 11 01:10:45.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:45.403: INFO: stderr: "" May 11 01:10:45.403: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:10:45.403: INFO: validating pod update-demo-nautilus-jjkpn May 11 01:10:45.407: INFO: got data: { "image": "nautilus.jpg" } May 11 01:10:45.407: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 01:10:45.407: INFO: update-demo-nautilus-jjkpn is verified up and running STEP: scaling down the replication controller May 11 01:10:45.466: INFO: scanned /root for discovery docs: May 11 01:10:45.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3994' May 11 01:10:46.610: INFO: stderr: "" May 11 01:10:46.610: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 01:10:46.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:46.854: INFO: stderr: "" May 11 01:10:46.854: INFO: stdout: "update-demo-nautilus-hkr57 update-demo-nautilus-jjkpn " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 01:10:51.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:51.966: INFO: stderr: "" May 11 01:10:51.966: INFO: stdout: "update-demo-nautilus-hkr57 update-demo-nautilus-jjkpn " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 01:10:56.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:57.078: INFO: stderr: "" May 11 01:10:57.078: INFO: stdout: "update-demo-nautilus-jjkpn " May 11 01:10:57.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:57.182: INFO: stderr: "" May 11 01:10:57.182: INFO: stdout: "true" May 11 01:10:57.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:57.293: INFO: stderr: "" May 11 01:10:57.293: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:10:57.293: INFO: validating pod update-demo-nautilus-jjkpn May 11 01:10:57.297: INFO: got data: { "image": "nautilus.jpg" } May 11 01:10:57.297: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 01:10:57.297: INFO: update-demo-nautilus-jjkpn is verified up and running STEP: scaling up the replication controller May 11 01:10:57.298: INFO: scanned /root for discovery docs: May 11 01:10:57.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3994' May 11 01:10:58.528: INFO: stderr: "" May 11 01:10:58.528: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 01:10:58.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:10:58.661: INFO: stderr: "" May 11 01:10:58.661: INFO: stdout: "update-demo-nautilus-jjkpn update-demo-nautilus-kbcph " May 11 01:10:58.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:58.776: INFO: stderr: "" May 11 01:10:58.776: INFO: stdout: "true" May 11 01:10:58.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:59.124: INFO: stderr: "" May 11 01:10:59.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:10:59.124: INFO: validating pod update-demo-nautilus-jjkpn May 11 01:10:59.271: INFO: got data: { "image": "nautilus.jpg" } May 11 01:10:59.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:10:59.271: INFO: update-demo-nautilus-jjkpn is verified up and running May 11 01:10:59.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbcph -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:10:59.374: INFO: stderr: "" May 11 01:10:59.374: INFO: stdout: "" May 11 01:10:59.374: INFO: update-demo-nautilus-kbcph is created but not running May 11 01:11:04.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3994' May 11 01:11:04.475: INFO: stderr: "" May 11 01:11:04.475: INFO: stdout: "update-demo-nautilus-jjkpn update-demo-nautilus-kbcph " May 11 01:11:04.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:11:04.572: INFO: stderr: "" May 11 01:11:04.572: INFO: stdout: "true" May 11 01:11:04.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjkpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:11:04.675: INFO: stderr: "" May 11 01:11:04.675: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:11:04.675: INFO: validating pod update-demo-nautilus-jjkpn May 11 01:11:04.678: INFO: got data: { "image": "nautilus.jpg" } May 11 01:11:04.678: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:11:04.678: INFO: update-demo-nautilus-jjkpn is verified up and running May 11 01:11:04.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbcph -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:11:04.775: INFO: stderr: "" May 11 01:11:04.775: INFO: stdout: "true" May 11 01:11:04.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbcph -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3994' May 11 01:11:04.864: INFO: stderr: "" May 11 01:11:04.864: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:11:04.864: INFO: validating pod update-demo-nautilus-kbcph May 11 01:11:04.867: INFO: got data: { "image": "nautilus.jpg" } May 11 01:11:04.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:11:04.867: INFO: update-demo-nautilus-kbcph is verified up and running STEP: using delete to clean up resources May 11 01:11:04.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3994' May 11 01:11:04.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 01:11:04.995: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 01:11:04.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3994' May 11 01:11:05.085: INFO: stderr: "No resources found in kubectl-3994 namespace.\n" May 11 01:11:05.085: INFO: stdout: "" May 11 01:11:05.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3994 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 01:11:05.192: INFO: stderr: "" May 11 01:11:05.192: INFO: stdout: "update-demo-nautilus-jjkpn\nupdate-demo-nautilus-kbcph\n" May 11 01:11:05.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3994' May 11 01:11:05.806: INFO: stderr: "No resources found in kubectl-3994 namespace.\n" May 11 01:11:05.806: INFO: stdout: "" May 11 01:11:05.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3994 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 01:11:05.926: INFO: stderr: "" May 11 01:11:05.926: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:11:05.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3994" for this suite. • [SLOW TEST:26.848 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":256,"skipped":4383,"failed":0} SS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:11:05.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 11 01:11:06.243: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 11 01:11:06.248: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 11 01:11:06.248: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 11 01:11:06.285: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 11 01:11:06.285: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 11 01:11:06.379: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 11 01:11:06.379: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 11 01:11:13.794: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:11:13.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-100" for this suite. 
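The defaults verified above (100m CPU / 200Mi memory requests, 500m CPU / 500Mi memory limits) come from a LimitRange in the namespace. A minimal sketch of such an object, using the request/limit values observed in this run; the min/max values and the name are illustrative assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo          # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:              # applied as requests to containers that specify none
      cpu: 100m
      memory: 200Mi
    default:                     # applied as limits to containers that specify none
      cpu: 500m
      memory: 500Mi
    min:                         # illustrative floor; pods below this are rejected
      cpu: 50m
    max:                         # illustrative ceiling; pods above this are rejected
      cpu: "1"
EOF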
• [SLOW TEST:7.936 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":257,"skipped":4385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:11:13.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 01:11:13.957: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:11:14.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9556" for this suite.
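------------------------------
For reference, the shape of the get/update/patch status-subresource operations the CRD test above exercises, via the apiextensions clientset. The CRD name, condition type, and label are placeholders; this sketch assumes a v1 CRD already exists in the cluster.

package main

import (
	"context"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crds := apiextclient.NewForConfigOrDie(cfg).ApiextensionsV1().CustomResourceDefinitions()
	name := "noxus.mygroup.example.com" // placeholder CRD name
	ctx := context.TODO()

	// Get: status is read as part of the CRD object itself.
	crd, err := crds.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Update via the /status endpoint: only .status (plus metadata) changes
	// are persisted on this path.
	crd.Status.Conditions = append(crd.Status.Conditions, apiextv1.CustomResourceDefinitionCondition{
		Type:    apiextv1.CustomResourceDefinitionConditionType("E2EDemo"), // placeholder condition
		Status:  apiextv1.ConditionTrue,
		Reason:  "E2E",
		Message: "set from the sketch",
	})
	if crd, err = crds.UpdateStatus(ctx, crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Patch: the generated clientset takes the subresource as a trailing argument.
	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`)
	if crd, err = crds.Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", crd.Labels)
}
------------------------------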
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":258,"skipped":4407,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:11:14.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:11:14.711: INFO: Creating ReplicaSet my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a May 11 01:11:14.726: INFO: Pod name my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a: Found 0 pods out of 1 May 11 01:11:19.770: INFO: Pod name my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a: Found 1 pods out of 1 May 11 01:11:19.770: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a" is running May 11 01:11:19.900: INFO: Pod "my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a-6p7lh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 01:11:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 01:11:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 01:11:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 01:11:14 +0000 UTC Reason: Message:}]) May 11 01:11:19.901: INFO: Trying to dial the pod May 11 01:11:24.909: INFO: Controller my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a: Got expected result from replica 1 [my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a-6p7lh]: "my-hostname-basic-ba846eec-e185-4678-8afa-4b1f4d52581a-6p7lh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:11:24.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3090" for this suite. 
• [SLOW TEST:10.312 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":259,"skipped":4410,"failed":0}
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:11:24.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 01:11:24.989: INFO: Waiting up to 5m0s for pod "pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d" in namespace "emptydir-9216" to be "Succeeded or Failed"
May 11 01:11:25.024: INFO: Pod "pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.782529ms
May 11 01:11:27.027: INFO: Pod "pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038555989s
May 11 01:11:29.032: INFO: Pod "pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042887909s
STEP: Saw pod success
May 11 01:11:29.032: INFO: Pod "pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d" satisfied condition "Succeeded or Failed"
May 11 01:11:29.035: INFO: Trying to get logs from node latest-worker pod pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d container test-container:
STEP: delete the pod
May 11 01:11:29.083: INFO: Waiting for pod pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d to disappear
May 11 01:11:29.098: INFO: Pod pod-8a00776f-520b-47d3-bfcd-f96bb7d0343d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:11:29.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9216" for this suite.
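------------------------------
For reference, a pod equivalent to "emptydir 0644 on tmpfs" as the test above runs it: an emptyDir volume backed by memory (tmpfs), a file written as root with mode 0644, and the result echoed so the suite can assert on the pod logs. busybox stands in for the suite's own mounttest image; names are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach "Succeeded"
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.31", // assumption
				Command: []string{"sh", "-c",
					"echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file && cat /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The suite then waits for phase "Succeeded" and checks the container
	// log for the expected mode and content, as the lines above show.
}
------------------------------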
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4410,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:11:29.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:11:29.192: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 11 01:11:31.287: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:11:32.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1095" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":261,"skipped":4429,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:11:32.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:11:33.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337" in namespace "downward-api-8989" to be "Succeeded or Failed" May 11 01:11:33.319: INFO: Pod "downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337": Phase="Pending", Reason="", readiness=false. 
Elapsed: 166.293693ms May 11 01:11:35.427: INFO: Pod "downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274437248s May 11 01:11:37.492: INFO: Pod "downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.339882767s STEP: Saw pod success May 11 01:11:37.492: INFO: Pod "downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337" satisfied condition "Succeeded or Failed" May 11 01:11:37.545: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337 container client-container: STEP: delete the pod May 11 01:11:38.022: INFO: Waiting for pod downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337 to disappear May 11 01:11:38.087: INFO: Pod downwardapi-volume-8b56ec74-0c34-45a1-962b-8a1c8a21a337 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:11:38.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8989" for this suite. • [SLOW TEST:5.718 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":262,"skipped":4433,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:11:38.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:13:38.496: INFO: Deleting pod "var-expansion-7a9a8a8d-0ebd-439f-b692-171ffc8bca81" in namespace "var-expansion-1172" May 11 01:13:38.501: INFO: Wait up to 5m0s for pod "var-expansion-7a9a8a8d-0ebd-439f-b692-171ffc8bca81" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:13:40.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1172" for this suite. 
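------------------------------
For reference, a sketch of the kind of pod spec the variable-expansion test above submits and expects to fail: a volumeMount whose subPathExpr expands an env var containing a backtick, which the kubelet refuses to mount. The two-minute gap between the namespace setup and the "Deleting pod" line is consistent with the test waiting out the expected failure before cleaning up. Names, image, and the exact env value are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-backticks"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31", // assumption
				Command: []string{"sh", "-c", "true"},
				Env: []corev1.EnvVar{{
					Name:  "POD_NAME",
					Value: "..`..", // backtick makes the expanded subpath invalid
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// Expanded per pod from the env var above; the kubelet
					// rejects the resulting subpath, so the pod never runs.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The suite asserts the pod fails to start, then deletes it and waits
	// for it to be fully gone, matching the log lines above.
}
------------------------------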
• [SLOW TEST:122.471 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":263,"skipped":4443,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:13:40.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 11 01:13:50.712: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:13:50.737: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:13:52.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:13:52.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:13:54.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:13:54.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:13:56.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:13:56.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:13:58.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:13:58.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:14:00.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:14:00.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:14:02.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:14:02.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:14:04.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:14:04.742: INFO: Pod pod-with-prestop-http-hook still exists
May 11 01:14:06.737: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 01:14:06.742: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:14:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3254" for this suite.
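------------------------------
For reference, the shape of the hook pod the prestop test above deletes: a container carrying a preStop httpGet hook. When the pod is deleted, the kubelet issues the GET before killing the container; the repeated "still exists" polls above are the test waiting for that graceful deletion to finish. Host, port, path, and image are placeholders for the handler pod the suite creates in its BeforeEach.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // assumption: any long-running container works
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler is the v1.18-era type name; current
					// client-go releases call it corev1.LifecycleHandler.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.0.10", // placeholder: IP of the handler pod
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting the pod triggers the hook; "STEP: check prestop hook" then
	// asserts that the handler pod actually received the GET request.
}
------------------------------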
• [SLOW TEST:26.206 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4457,"failed":0}
S
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:14:06.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 01:14:06.848: INFO: Creating deployment "webserver-deployment"
May 11 01:14:06.854: INFO: Waiting for observed generation 1
May 11 01:14:08.876: INFO: Waiting for all required pods to come up
May 11 01:14:08.882: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 11 01:14:18.974: INFO: Waiting for deployment "webserver-deployment" to complete
May 11 01:14:18.978: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 11 01:14:18.984: INFO: Updating deployment webserver-deployment
May 11 01:14:18.984: INFO: Waiting for observed generation 2
May 11 01:14:21.098: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 11 01:14:21.103: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 11 01:14:21.109: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 01:14:21.116: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 11 01:14:21.116: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 11 01:14:21.118: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 01:14:21.121: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 11 01:14:21.122: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 11 01:14:21.126: INFO: Updating deployment webserver-deployment
May 11 01:14:21.126: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 11 01:14:21.198: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 11 01:14:21.363: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 01:14:21.693: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4589 /apis/apps/v1/namespaces/deployment-4589/deployments/webserver-deployment 39d18314-31e2-4621-afb5-d663cd93d477 3233459 3 2020-05-11 01:14:06 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002990e28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-11 01:14:19 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 01:14:21 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 11 01:14:21.818: INFO: New 
ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-4589 /apis/apps/v1/namespaces/deployment-4589/replicasets/webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 3233507 3 2020-05-11 01:14:18 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 39d18314-31e2-4621-afb5-d663cd93d477 0xc0029912c7 0xc0029912c8}] [] [{kube-controller-manager Update apps/v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39d18314-31e2-4621-afb5-d663cd93d477\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002991348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 01:14:21.818: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 11 01:14:21.818: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-4589 /apis/apps/v1/namespaces/deployment-4589/replicasets/webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 3233506 3 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 39d18314-31e2-4621-afb5-d663cd93d477 0xc0029913a7 0xc0029913a8}] [] [{kube-controller-manager Update apps/v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39d18314-31e2-4621-afb5-d663cd93d477\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002991418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 11 01:14:21.891: INFO: Pod "webserver-deployment-6676bcd6d4-2p5h7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2p5h7 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-2p5h7 fea99ff0-1ca4-4092-a9c6-4d57a600fdff 3233500 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62637 0xc004b62638}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.891: INFO: Pod "webserver-deployment-6676bcd6d4-4hss8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4hss8 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-4hss8 df7fe438-0284-41a5-81c2-704c3b5a6d6a 3233422 0 2020-05-11 01:14:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62777 0xc004b62778}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriod
Seconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 01:14:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.891: INFO: Pod "webserver-deployment-6676bcd6d4-7jl79" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7jl79 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-7jl79 099fe302-ac4f-42ba-a368-a65203b00e76 3233513 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62927 0xc004b62928}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.892: INFO: Pod "webserver-deployment-6676bcd6d4-9lcq8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9lcq8 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-9lcq8 9f24c8c2-d3d8-4dd3-994e-4648a73155c3 3233473 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62a67 0xc004b62a68}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContaine
rs:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.892: INFO: Pod "webserver-deployment-6676bcd6d4-bjwjg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bjwjg webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-bjwjg 900ea76b-860e-4358-a617-4f9bb4d84ff3 3233443 0 2020-05-11 01:14:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62ba7 0xc004b62ba8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 01:14:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.892: INFO: Pod "webserver-deployment-6676bcd6d4-c5gzs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-c5gzs webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-c5gzs d80f47ed-0e3b-4530-87b3-53b44beaa7a8 3233411 0 2020-05-11 01:14:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62d57 0xc004b62d58}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 01:14:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.893: INFO: Pod "webserver-deployment-6676bcd6d4-gs44z" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gs44z webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-gs44z 845fddd8-cc9e-453f-ba94-1b3be0003538 3233440 0 2020-05-11 01:14:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b62f07 0xc004b62f08}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 01:14:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.893: INFO: Pod "webserver-deployment-6676bcd6d4-qkhst" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qkhst webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-qkhst cd8d6f11-1a8e-4b9a-8c6c-49bd1561d693 3233479 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b630b7 0xc004b630b8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.893: INFO: Pod "webserver-deployment-6676bcd6d4-qz2lm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qz2lm webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-qz2lm 94ed3f7a-c687-49f7-b111-49665041a390 3233483 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b631f7 0xc004b631f8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.894: INFO: Pod "webserver-deployment-6676bcd6d4-rrv99" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rrv99 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-rrv99 1b264760-3d6d-4b12-87a9-2e35f1f0aa32 3233423 0 2020-05-11 01:14:19 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b63347 0xc004b63348}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 01:14:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.894: INFO: Pod "webserver-deployment-6676bcd6d4-tkf29" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tkf29 webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-tkf29 4472792a-cd7e-4e4c-b98f-92af470b2115 3233487 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b634f7 0xc004b634f8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.894: INFO: Pod "webserver-deployment-6676bcd6d4-tzgzn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tzgzn webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-tzgzn 04e2f444-1d27-4caa-ac49-0a920ca69bfc 3233485 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b63637 0xc004b63638}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.894: INFO: Pod "webserver-deployment-6676bcd6d4-vkwhz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vkwhz webserver-deployment-6676bcd6d4- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-6676bcd6d4-vkwhz da53fc5f-4ee6-4674-af8d-a6786d1a38a5 3233464 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b859215d-d0e9-46d3-9fe5-a07e931a9c43 0xc004b63777 0xc004b63778}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b859215d-d0e9-46d3-9fe5-a07e931a9c43\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,
AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.894: INFO: Pod "webserver-deployment-84855cf797-44fd8" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-44fd8 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-44fd8 23f74943-ee33-4086-b955-44b6c7c05c76 3233517 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b638b7 0xc004b638b8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.895: INFO: Pod "webserver-deployment-84855cf797-59wh9" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-59wh9 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-59wh9 902cef23-c6a9-4290-b781-c5f55871bab8 3233511 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b639e7 0xc004b639e8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 01:14:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.895: INFO: Pod "webserver-deployment-84855cf797-5ld68" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5ld68 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-5ld68 ef451f7d-ac1e-49f0-b82f-7fc36c0305c7 3233499 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b63b77 0xc004b63b78}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.895: INFO: Pod "webserver-deployment-84855cf797-6phkj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6phkj webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-6phkj 1776d90c-c2c3-465a-8c10-9c7d1cff2bfe 3233486 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b63ca7 0xc004b63ca8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.895: INFO: Pod "webserver-deployment-84855cf797-6x79t" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6x79t webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-6x79t 0ed4a7ac-0731-4f68-b139-d86dd28af11e 3233350 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b63dd7 0xc004b63dd8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:15 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.11,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5dd8f1ab89fcb38b5c285052f2241a0090058cc6f5f2cca8181fd62150459a3f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.895: INFO: Pod "webserver-deployment-84855cf797-7xn45" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7xn45 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-7xn45 ad8eef88-d5b3-4793-9290-c05389c33439 3233338 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc004b63f87 0xc004b63f88}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.183,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://07f508819fa2865d38d92bde5de4f031dad7364a3ae4fdf0d0d38c0ee6461081,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-cdpml" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cdpml webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-cdpml 161ddf17-7ba8-4da5-815a-c5692cfcf9f6 3233514 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c137 0xc00292c138}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-cmlwr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cmlwr webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-cmlwr 5d95e04c-3191-4e23-9c2b-eb01218d6135 3233526 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c267 0xc00292c268}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 01:14:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-cmpnv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cmpnv webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-cmpnv 48144f9d-6dd4-481d-87c9-79043261a431 3233498 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c3f7 0xc00292c3f8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-fm4mk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fm4mk webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-fm4mk fce12d2b-fe01-4f32-b2e6-31c236813002 3233357 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c527 0xc00292c528}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.10,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9f2d3238f926c9238bf3e37fc4446a278424ea8bcd1fd5d92660b665a111a896,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-gj4tw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gj4tw webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-gj4tw b3bf2c6a-7985-47a4-b88f-6a5c1eabcb24 3233369 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c6f7 0xc00292c6f8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.184,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eae3d407f0c2a06e921681c45188ad56d912d9bd3d691e4b3fbbe5672aa5e1a4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.896: INFO: Pod "webserver-deployment-84855cf797-jp9xr" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jp9xr webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-jp9xr 293cdace-3e2c-4413-8962-9ccacb133a58 3233363 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292c8a7 0xc00292c8a8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.12,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://52d55fc5fbbc549de1a0e30cc0dce8b229e1a4f141fc349b7022282cb4c12a52,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.897: INFO: Pod "webserver-deployment-84855cf797-lzj7v" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lzj7v webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-lzj7v ba08a9d2-3a98-4eff-9754-7d687f6e7207 3233375 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292ca57 0xc00292ca58}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.185,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://76e9bdbd2b4a1754acef33fdd567b2df81bd1a6f0935c5529fc7d29a8446427f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.897: INFO: Pod "webserver-deployment-84855cf797-m9pr2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m9pr2 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-m9pr2 005cf844-1646-42a0-bc0e-de603f6d43f2 3233516 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292cc17 0xc00292cc18}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.897: INFO: Pod "webserver-deployment-84855cf797-r9gsn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r9gsn webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-r9gsn 34de3ed2-e5f8-4d49-b545-73cff9fd8162 3233515 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292cd47 0xc00292cd48}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.897: INFO: Pod "webserver-deployment-84855cf797-tq66s" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tq66s webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-tq66s e4704fcd-2b9c-4084-8468-1bb0ba0ad633 3233318 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292ce77 0xc00292ce78}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.9,StartTime:2020-05-11 01:14:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f5c7418fb9868575051c3b41c862a3eefc12dbba4e831de1cc413ced8d47792f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.898: INFO: Pod "webserver-deployment-84855cf797-vwqk4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vwqk4 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-vwqk4 a556b5d0-ef34-4709-9c96-ba1d3409ae05 3233344 0 2020-05-11 01:14:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292d027 0xc00292d028}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 01:14:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.182,StartTime:2020-05-11 01:14:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 01:14:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://604b339b3e97cb876b4d0b8230807cc0271bd54de70c35c8bef0b5695f891cf4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.898: INFO: Pod "webserver-deployment-84855cf797-x98j6" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-x98j6 webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-x98j6 4386de6a-d924-4180-b6b3-9a095b98ba8c 3233477 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292d1d7 0xc00292d1d8}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.898: INFO: Pod "webserver-deployment-84855cf797-xls7t" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xls7t webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-xls7t a7156f47-e7c0-4354-b4bb-c93628a3514c 3233512 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292d307 0xc00292d308}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defaul
t-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 01:14:21.898: INFO: Pod "webserver-deployment-84855cf797-zg66m" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zg66m webserver-deployment-84855cf797- deployment-4589 /api/v1/namespaces/deployment-4589/pods/webserver-deployment-84855cf797-zg66m 0f2e33a2-5a1e-497d-bd4b-f788cccc82ef 3233488 0 2020-05-11 01:14:21 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 012f416a-fe0b-4c09-bdbe-b23df03b4053 0xc00292d437 0xc00292d438}] [] [{kube-controller-manager Update v1 2020-05-11 01:14:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012f416a-fe0b-4c09-bdbe-b23df03b4053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qh9ff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qh9ff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qh9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:
nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 01:14:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:14:21.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4589" for this suite. 
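------------------------------
The wall of Pod dumps above is the proportional-scaling check logging every ReplicaSet-owned pod while the Deployment is resized mid-rollout. The behaviour under test comes from the rolling-update parameters: with both maxSurge and maxUnavailable non-zero, a scale event is split across the old and new ReplicaSets in proportion to their current sizes. A minimal sketch of an equivalent object, assuming the v1.18-era k8s.io/api Go types; the replica, surge, and unavailability values are illustrative, not read out of this run:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative values: allow 3 extra pods and 2 unavailable pods during a rollout.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	replicas := int32(10)

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine", // same image as the pods logged above
					}},
				},
			},
		},
	}
	fmt.Printf("%s: %d replicas, maxSurge=%s, maxUnavailable=%s\n",
		d.Name, *d.Spec.Replicas,
		d.Spec.Strategy.RollingUpdate.MaxSurge.String(),
		d.Spec.Strategy.RollingUpdate.MaxUnavailable.String())
}
------------------------------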
• [SLOW TEST:15.338 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":265,"skipped":4458,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:14:22.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:14:22.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432" in namespace "projected-1970" to be "Succeeded or Failed" May 11 01:14:22.813: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 196.508881ms May 11 01:14:25.240: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623854221s May 11 01:14:27.266: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649798808s May 11 01:14:29.776: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 7.159817562s May 11 01:14:32.248: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 9.631308219s May 11 01:14:34.673: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0568332s May 11 01:14:36.996: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Pending", Reason="", readiness=false. Elapsed: 14.379965599s May 11 01:14:39.009: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.392711328s STEP: Saw pod success May 11 01:14:39.009: INFO: Pod "downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432" satisfied condition "Succeeded or Failed" May 11 01:14:39.015: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432 container client-container: STEP: delete the pod May 11 01:14:39.092: INFO: Waiting for pod downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432 to disappear May 11 01:14:39.096: INFO: Pod downwardapi-volume-4efbbd01-54ec-4a81-a493-249f8e6ac432 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:14:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1970" for this suite. • [SLOW TEST:16.997 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4465,"failed":0} [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:14:39.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 01:14:39.182: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:14:45.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9330" for this suite. 
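------------------------------
The Pods test that just finished ("should support remote command execution over websockets") dials the pod's exec subresource on the API server. The stock client-go executor reaches the same subresource over SPDY rather than a raw websocket, so the sketch below shows the subresource request itself under that substitution; the pod and container names are hypothetical, and the kubeconfig path matches the one this run uses:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical running pod and container to exec into.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("exec-demo").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote execution works"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// SPDY executor here; the conformance test drives the same URL over a websocket.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
------------------------------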
• [SLOW TEST:6.667 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:14:45.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:14:46.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288" in namespace "downward-api-8869" to be "Succeeded or Failed" May 11 01:14:47.284: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288": Phase="Pending", Reason="", readiness=false. Elapsed: 478.640187ms May 11 01:14:49.631: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.825454974s May 11 01:14:52.364: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288": Phase="Pending", Reason="", readiness=false. Elapsed: 5.558191098s May 11 01:14:54.594: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288": Phase="Running", Reason="", readiness=true. Elapsed: 7.78852239s May 11 01:14:56.788: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.982218254s STEP: Saw pod success May 11 01:14:56.788: INFO: Pod "downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288" satisfied condition "Succeeded or Failed" May 11 01:14:56.792: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288 container client-container: STEP: delete the pod May 11 01:14:57.327: INFO: Waiting for pod downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288 to disappear May 11 01:14:57.524: INFO: Pod downwardapi-volume-3fed99ed-6a36-4632-a380-27a19a259288 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:14:57.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8869" for this suite. 
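------------------------------
The Downward API volume test above projects the container's own cpu limit into a file via a resourceFieldRef. A minimal sketch of the kind of pod it creates, assuming the v1.18-era k8s.io/api types; the names, image, and 500m limit are illustrative. With a divisor of 1m, a 500m limit is rendered into the file as the string "500":

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// The container's own limit, scaled by the divisor.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "projects", pod.Spec.Volumes[0].DownwardAPI.Items[0].ResourceFieldRef.Resource)
}
------------------------------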
• [SLOW TEST:11.920 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4499,"failed":0} [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:14:57.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 11 01:14:58.055: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix082302162/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:14:58.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8826" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":269,"skipped":4499,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:14:58.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-57bf8379-b048-43d4-b277-acef2944d914 STEP: Creating a pod to test consume configMaps May 11 01:14:58.526: INFO: Waiting up to 5m0s for pod "pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42" in namespace "configmap-3852" to be "Succeeded or Failed" May 11 01:14:58.721: INFO: Pod "pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42": Phase="Pending", Reason="", readiness=false. Elapsed: 195.644304ms May 11 01:15:00.726: INFO: Pod "pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.199864629s May 11 01:15:02.729: INFO: Pod "pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202870654s STEP: Saw pod success May 11 01:15:02.729: INFO: Pod "pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42" satisfied condition "Succeeded or Failed" May 11 01:15:02.733: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42 container configmap-volume-test: STEP: delete the pod May 11 01:15:02.791: INFO: Waiting for pod pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42 to disappear May 11 01:15:02.805: INFO: Pod pod-configmaps-97facdd1-e84b-4e7b-8bb1-582e33b68e42 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:15:02.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3852" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4511,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:15:02.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:16:02.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9319" for this suite. 
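------------------------------
The probe test above relies on two properties of readiness probes: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints), but unlike a liveness probe it never restarts the container, so restartCount stays 0 for the whole minute the test watches. A sketch of a pod that is healthy yet never ready, assuming the v1.18-era types where Probe embeds Handler (later releases renamed it ProbeHandler); names and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"}, // the container itself stays healthy
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Always fails, so Ready never becomes true.
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println(pod.Name, "readiness probe:", pod.Spec.Containers[0].ReadinessProbe.Exec.Command)
}
------------------------------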
• [SLOW TEST:60.124 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4518,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:16:02.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 in namespace container-probe-3944 May 11 01:16:07.089: INFO: Started pod liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 in namespace container-probe-3944 STEP: checking the pod's current state and verifying that restartCount is present May 11 01:16:07.092: INFO: Initial restart count of pod liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is 0 May 11 01:16:23.129: INFO: Restart count of pod container-probe-3944/liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is now 1 (16.037503166s elapsed) May 11 01:16:43.172: INFO: Restart count of pod container-probe-3944/liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is now 2 (36.079685304s elapsed) May 11 01:17:03.215: INFO: Restart count of pod container-probe-3944/liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is now 3 (56.12306906s elapsed) May 11 01:17:23.256: INFO: Restart count of pod container-probe-3944/liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is now 4 (1m16.164503862s elapsed) May 11 01:18:33.447: INFO: Restart count of pod container-probe-3944/liveness-9484f83c-95bb-4484-9b52-2a62548f3fe6 is now 5 (2m26.355117632s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:18:33.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3944" for this suite. 
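------------------------------
The assertion behind the restart-count lines above is that status.containerStatuses[].restartCount only ever grows: the kubelet increments it on every liveness-triggered restart and nothing decrements it. A client-go polling sketch of that invariant, with a hypothetical namespace and pod name and the kubeconfig path this run uses:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical pod whose liveness probe keeps failing.
	const ns, name = "default", "liveness-demo"

	last := int32(-1)
	for i := 0; i < 30; i++ {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(pod.Status.ContainerStatuses) > 0 {
			rc := pod.Status.ContainerStatuses[0].RestartCount
			if rc < last {
				panic("restart count went backwards") // the invariant the e2e test asserts
			}
			if rc > last {
				fmt.Printf("restartCount is now %d\n", rc)
				last = rc
			}
		}
		time.Sleep(10 * time.Second)
	}
}
------------------------------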
• [SLOW TEST:150.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4524,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:18:33.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:18:33.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c" in namespace "downward-api-9170" to be "Succeeded or Failed" May 11 01:18:33.776: INFO: Pod "downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c": Phase="Pending", Reason="", readiness=false. Elapsed: 130.638001ms May 11 01:18:35.781: INFO: Pod "downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135603342s May 11 01:18:37.785: INFO: Pod "downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139213679s STEP: Saw pod success May 11 01:18:37.785: INFO: Pod "downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c" satisfied condition "Succeeded or Failed" May 11 01:18:37.787: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c container client-container: STEP: delete the pod May 11 01:18:37.846: INFO: Waiting for pod downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c to disappear May 11 01:18:37.864: INFO: Pod downwardapi-volume-76837e83-9082-4a71-8163-cd43f955b71c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:18:37.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9170" for this suite. 
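------------------------------
The podname test is the fieldRef counterpart of the resourceFieldRef example earlier: pod metadata is projected with an ObjectFieldSelector, and DefaultMode (the knob exercised by the DefaultMode tests in this run) sets the permission bits for any projected file that does not choose its own Mode. A sketch under the same v1.18-era type assumptions, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // applied to files that do not set their own Mode
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-podname"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							// The pod's own name, resolved at volume setup time.
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "projects", pod.Spec.Volumes[0].DownwardAPI.Items[0].FieldRef.FieldPath)
}
------------------------------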
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:18:37.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 11 01:18:37.953: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:18:38.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6701" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":274,"skipped":4559,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:18:38.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 01:18:42.734: INFO: Successfully updated pod "pod-update-activedeadlineseconds-54947623-ce71-4543-a418-a9fb664db2c9" May 11 01:18:42.734: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-54947623-ce71-4543-a418-a9fb664db2c9" in namespace "pods-7646" to be "terminated due to deadline exceeded" May 11 01:18:42.756: INFO: Pod "pod-update-activedeadlineseconds-54947623-ce71-4543-a418-a9fb664db2c9": Phase="Running", Reason="", readiness=true. Elapsed: 22.543219ms May 11 01:18:44.830: INFO: Pod "pod-update-activedeadlineseconds-54947623-ce71-4543-a418-a9fb664db2c9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.096251917s May 11 01:18:44.830: INFO: Pod "pod-update-activedeadlineseconds-54947623-ce71-4543-a418-a9fb664db2c9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:18:44.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7646" for this suite. • [SLOW TEST:6.783 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4575,"failed":0} SSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:18:44.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3970 May 11 01:18:47.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 11 01:18:47.279: INFO: stderr: "I0511 01:18:47.190519 3550 log.go:172] (0xc0006c2420) (0xc0006f2e60) Create stream\nI0511 01:18:47.190573 3550 log.go:172] (0xc0006c2420) (0xc0006f2e60) Stream added, broadcasting: 1\nI0511 01:18:47.192914 3550 log.go:172] (0xc0006c2420) Reply frame received for 1\nI0511 01:18:47.192960 3550 log.go:172] (0xc0006c2420) (0xc000a92000) Create stream\nI0511 01:18:47.192977 3550 log.go:172] (0xc0006c2420) (0xc000a92000) Stream added, broadcasting: 3\nI0511 01:18:47.194203 3550 log.go:172] (0xc0006c2420) Reply frame received for 3\nI0511 01:18:47.194270 3550 log.go:172] (0xc0006c2420) (0xc0005406e0) Create stream\nI0511 01:18:47.194294 3550 log.go:172] (0xc0006c2420) (0xc0005406e0) Stream added, broadcasting: 5\nI0511 01:18:47.195462 3550 log.go:172] (0xc0006c2420) Reply frame received for 5\nI0511 01:18:47.265515 3550 log.go:172] (0xc0006c2420) Data frame received for 5\nI0511 01:18:47.265543 3550 log.go:172] (0xc0005406e0) (5) Data frame handling\nI0511 01:18:47.265559 3550 log.go:172] (0xc0005406e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0511 01:18:47.270833 3550 log.go:172] (0xc0006c2420) Data frame received for 3\nI0511 01:18:47.270852 3550 log.go:172] (0xc000a92000) (3) Data frame handling\nI0511 01:18:47.270862 3550 log.go:172] (0xc000a92000) 
(3) Data frame sent [... remaining SPDY stream-frame records elided ...]"
May 11 01:18:47.279: INFO: stdout: "iptables"
May 11 01:18:47.279: INFO: proxyMode: iptables
May 11 01:18:47.285: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 11 01:18:47.318: INFO: Pod kube-proxy-mode-detector still exists (poll repeated every 2s)
May 11 01:18:51.322: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-3970
STEP: creating replication controller affinity-nodeport-timeout in namespace services-3970
I0511 01:18:51.435562       7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3970, replica count: 3
I0511 01:18:54.485986       7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 01:18:57.486186       7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 11 01:18:57.494: INFO: Creating new exec pod
May 11 01:19:02.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
May 11 01:19:05.370: INFO: stderr: "[...] + nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded! [...]" (SPDY stream setup/teardown frames elided here and in the stderr quotes below)
May 11 01:19:05.370: INFO: stdout: ""
May 11 01:19:05.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c nc -zv -t -w 2 10.107.52.181 80'
May 11 01:19:05.597: INFO: stderr: "[...] + nc -zv -t -w 2 10.107.52.181 80\nConnection to 10.107.52.181 80 port [tcp/http] succeeded! [...]"
May 11 01:19:05.597: INFO: stdout: ""
May 11 01:19:05.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32261'
May 11 01:19:05.833: INFO: stderr: "[...] + nc -zv -t -w 2 172.17.0.13 32261\nConnection to 172.17.0.13 32261 port [tcp/32261] succeeded! [...]"
May 11 01:19:05.833: INFO: stdout: ""
May 11 01:19:05.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32261'
May 11 01:19:06.036: INFO: stderr: "[...] + nc -zv -t -w 2 172.17.0.12 32261\nConnection to 172.17.0.12 32261 port [tcp/32261] succeeded! [...]"
May 11 01:19:06.036: INFO: stdout: ""
May 11 01:19:06.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32261/ ; done'
May 11 01:19:06.302: INFO: stderr: "[...] + seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32261/ [...]" (the echo/curl pair repeats 16 times)
May 11 01:19:06.303: INFO: stdout: "\naffinity-nodeport-timeout-z5hdl [...]" (16 entries, all naming the same endpoint)
May 11 01:19:06.303: INFO: Received response from host:
May 11 01:19:06.303: INFO: Received response from host: affinity-nodeport-timeout-z5hdl (this line repeats 16 times, once per curl in the loop)
May 11 01:19:06.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32261/'
May 11 01:19:06.535: INFO: stdout: "affinity-nodeport-timeout-z5hdl"
May 11 01:19:21.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3970 execpod-affinitybmmhd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32261/' (issued after a 15-second pause; the client-IP affinity entry had expired by then, so a different endpoint answered)
May 11 01:19:21.775: INFO: stdout: "affinity-nodeport-timeout-hrtx4"
May 11 01:19:21.775: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3970, will wait for the garbage collector to delete the pods
May 11 01:19:22.269: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 355.487257ms
May 11 01:19:22.769: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.232154ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:19:28.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3970" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:43.260 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":276,"skipped":4579,"failed":0}
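The test above creates a NodePort Service with client-IP session affinity and a short affinity timeout; the manifest itself is generated by the e2e framework and never echoed to the log. A minimal hand-written sketch of an equivalent Service follows. The selector, targetPort, and timeoutSeconds are assumptions; the log only shows service port 80, nodePort 32261, and the affinity lapsing during the 15-second pause.

apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport-timeout
  namespace: services-3970
spec:
  type: NodePort                      # nodePort allocated in this run: 32261
  selector:
    name: affinity-nodeport-timeout   # assumed to match the RC's pod label
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10              # assumed value; must be shorter than the test's 15s pause
  ports:
  - port: 80
    targetPort: 9376                  # assumed backend port

With kube-proxy in iptables mode (as detected above), client-IP affinity is implemented with the iptables "recent" match: all 16 looped requests hit affinity-nodeport-timeout-z5hdl, and only after the idle pause did a different backend (affinity-nodeport-timeout-hrtx4) answer.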
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:19:28.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:19:34.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1848" for this suite.
STEP: Destroying namespace "nsdeletetest-469" for this suite.
May 11 01:19:34.544: INFO: Namespace nsdeletetest-469 was already deleted
STEP: Destroying namespace "nsdeletetest-2911" for this suite.
• [SLOW TEST:6.449 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":277,"skipped":4601,"failed":0}
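The namespace test above depends on deletion cascading from a Namespace to every namespaced object inside it. As a minimal illustration (all names here are hypothetical, not the generated nsdeletetest-* names), deleting the Namespace below also deletes the Service, and recreating a namespace with the same name yields an empty one:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo    # hypothetical
---
apiVersion: v1
kind: Service
metadata:
  name: test-service         # hypothetical
  namespace: nsdeletetest-demo
spec:
  ports:
  - port: 80

That is what the "Verifying there is no service in the namespace" step checks: the namespace controller removes all namespaced resources before the Namespace object itself disappears.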
SS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:19:34.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0511 01:19:35.686088       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 01:19:35.686: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:19:35.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2761" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":278,"skipped":4603,"failed":0}
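Whether the garbage collector removes the ReplicaSet (and its Pods) is decided by the propagationPolicy of the delete request, not by anything in the Deployment itself. Below is a sketch of the DeleteOptions body a non-orphaning delete sends; the exact policy the framework uses here is not shown in the log, so Background is an illustrative choice:

# body of DELETE /apis/apps/v1/namespaces/<namespace>/deployments/<name>
apiVersion: v1                  # meta.k8s.io/v1 is also accepted for DeleteOptions
kind: DeleteOptions
propagationPolicy: Background   # owner is deleted immediately; GC removes dependents asynchronously
# Foreground: owner stays until dependents are gone; Orphan: dependents are kept

The intermediate "expected 0 rs, got 1 rs" and "expected 0 pods, got 2 pods" steps above are simply the test polling while the collector catches up. The same mechanism is exercised by the next test against Pods owned by a ReplicationController.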
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":278,"skipped":4603,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:19:35.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0511 01:19:45.973779 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 01:19:45.973: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:19:45.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6197" for this suite. 
• [SLOW TEST:10.286 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":279,"skipped":4614,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:19:45.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 01:19:46.068: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 11 01:19:46.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 01:19:46.107: INFO: Number of nodes with available pods: 0
(launch polled roughly once per second; the same taint-skip line is logged on every iteration)
May 11 01:19:50.116: INFO: Number of nodes with available pods: 1
May 11 01:19:51.120: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 11 01:19:51.267: INFO: Wrong image for pod: daemon-set-6c7nz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 11 01:19:51.267: INFO: Wrong image for pod: daemon-set-cxntt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
(the rollout is polled once per second with the same pair of lines: daemon-set-6c7nz becomes unavailable at 01:19:55 and is replaced by daemon-set-9jvrf at 01:20:05; daemon-set-cxntt becomes unavailable at 01:20:10 and is replaced by daemon-set-5rcs4)
May 11 01:20:11.287: INFO: Pod daemon-set-5rcs4 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
May 11 01:20:11.299: INFO: Number of nodes with available pods: 1
(polled once per second until both replacement pods reported available)
May 11 01:20:14.307: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4204, will wait for the garbage collector to delete the pods
May 11 01:20:14.380: INFO: Deleting DaemonSet.extensions daemon-set took: 6.872186ms
May 11 01:20:14.680: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.296764ms
May 11 01:20:24.883: INFO: Number of nodes with available pods: 0
May 11 01:20:24.883: INFO: Number of running nodes: 0, number of available pods: 0
May 11 01:20:24.885: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4204/daemonsets","resourceVersion":"3235486"},"items":null}
May 11 01:20:24.887: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4204/pods","resourceVersion":"3235486"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 01:20:24.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4204" for this suite.
• [SLOW TEST:38.923 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":280,"skipped":4624,"failed":0}
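The DaemonSet under test is created by the framework and not printed. Reconstructed from what the log does show (namespace daemonsets-4204, name daemon-set, initial image httpd:2.4.38-alpine, target image agnhost:2.13, one pod replaced at a time), an equivalent spec would look roughly like this; the labels, container name, and maxUnavailable value are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4204
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # assumed; consistent with the one-pod-at-a-time rollout above
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                  # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine
        # the test then updates this image to
        # us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 and waits for the rollout

Because latest-control-plane carries the node-role.kubernetes.io/master:NoSchedule taint and the pod template declares no matching toleration, the rollout only ever involves the two worker nodes, which is why every poll logs the "skip checking this node" line.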
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 01:20:24.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 11 01:20:25.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2305'
May 11 01:20:25.271: INFO: stderr: ""
May 11 01:20:25.271: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 01:20:25.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2305'
May 11 01:20:25.405: INFO: stderr: ""
May 11 01:20:25.405: INFO: stdout: "update-demo-nautilus-5jq8t update-demo-nautilus-szc5v "
May 11 01:20:25.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jq8t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2305' May 11 01:20:25.520: INFO: stderr: "" May 11 01:20:25.520: INFO: stdout: "" May 11 01:20:25.520: INFO: update-demo-nautilus-5jq8t is created but not running May 11 01:20:30.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2305' May 11 01:20:30.655: INFO: stderr: "" May 11 01:20:30.655: INFO: stdout: "update-demo-nautilus-5jq8t update-demo-nautilus-szc5v " May 11 01:20:30.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jq8t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2305' May 11 01:20:30.746: INFO: stderr: "" May 11 01:20:30.746: INFO: stdout: "true" May 11 01:20:30.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jq8t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2305' May 11 01:20:30.844: INFO: stderr: "" May 11 01:20:30.844: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:20:30.844: INFO: validating pod update-demo-nautilus-5jq8t May 11 01:20:30.849: INFO: got data: { "image": "nautilus.jpg" } May 11 01:20:30.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:20:30.849: INFO: update-demo-nautilus-5jq8t is verified up and running May 11 01:20:30.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-szc5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2305' May 11 01:20:30.965: INFO: stderr: "" May 11 01:20:30.965: INFO: stdout: "true" May 11 01:20:30.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-szc5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2305' May 11 01:20:31.062: INFO: stderr: "" May 11 01:20:31.062: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 01:20:31.062: INFO: validating pod update-demo-nautilus-szc5v May 11 01:20:31.066: INFO: got data: { "image": "nautilus.jpg" } May 11 01:20:31.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 01:20:31.066: INFO: update-demo-nautilus-szc5v is verified up and running STEP: using delete to clean up resources May 11 01:20:31.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2305' May 11 01:20:31.172: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 01:20:31.172: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 01:20:31.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2305' May 11 01:20:31.274: INFO: stderr: "No resources found in kubectl-2305 namespace.\n" May 11 01:20:31.274: INFO: stdout: "" May 11 01:20:31.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2305 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 01:20:31.410: INFO: stderr: "" May 11 01:20:31.411: INFO: stdout: "update-demo-nautilus-5jq8t\nupdate-demo-nautilus-szc5v\n" May 11 01:20:31.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2305' May 11 01:20:32.080: INFO: stderr: "No resources found in kubectl-2305 namespace.\n" May 11 01:20:32.080: INFO: stdout: "" May 11 01:20:32.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2305 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 01:20:32.250: INFO: stderr: "" May 11 01:20:32.250: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:20:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2305" for this suite. • [SLOW TEST:7.354 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":281,"skipped":4635,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:20:32.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
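------------------------------
The Update Demo check above establishes readiness by re-running kubectl with a Go template until it prints "true", on a five-second retry cadence. A minimal standalone sketch of that loop follows, reusing the exact template from the run; the server address, kubeconfig path, pod name, and namespace are values from this particular run and would differ elsewhere.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	args := []string{
		"--server=https://172.30.12.66:32773",
		"--kubeconfig=/root/.kube/config",
		"get", "pods", "update-demo-nautilus-5jq8t",
		"-o", "template",
		`--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`,
		"--namespace=kubectl-2305",
	}
	// Retry on the same 5s cadence the framework uses; give up after 5 minutes.
	for i := 0; i < 60; i++ {
		out, err := exec.Command("/usr/local/bin/kubectl", args...).Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			fmt.Println("container update-demo is running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for container to run")
}

An empty stdout, as in the first poll above, simply means the container has not reached a "running" state yet, so the loop sleeps and retries.
------------------------------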
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 01:20:42.651: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:42.674: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:44.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:44.678: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:46.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:46.679: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:48.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:48.678: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:50.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:50.679: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:52.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:52.679: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:54.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:54.679: INFO: Pod pod-with-poststart-http-hook still exists May 11 01:20:56.674: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 01:20:56.678: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:20:56.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3297" for this suite. 
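------------------------------
The lifecycle-hook spec above records the pod name (pod-with-poststart-http-hook) but not its manifest. Below is a sketch of the shape such a pod takes with the v1.18-era client-go API this suite builds against (the Handler type was renamed LifecycleHandler in later releases); the image, hook host, port, and path are assumptions, since the hook in the test points at the handler container created during BeforeEach.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							// Illustrative target: the handler pod's IP and port.
							Path: "/echo?msg=poststart",
							Host: "10.244.2.30",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // pipe to `kubectl apply -f -` to try it
}

The kubelet fires the hook immediately after the container starts; the "check poststart hook" step above then verifies that the handler container observed the GET.
------------------------------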
• [SLOW TEST:24.429 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4647,"failed":0} [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:20:56.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 01:20:56.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6" in namespace "downward-api-6578" to be "Succeeded or Failed" May 11 01:20:56.808: INFO: Pod "downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.044027ms May 11 01:20:58.812: INFO: Pod "downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025360778s May 11 01:21:00.817: INFO: Pod "downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030256002s STEP: Saw pod success May 11 01:21:00.817: INFO: Pod "downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6" satisfied condition "Succeeded or Failed" May 11 01:21:00.821: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6 container client-container: STEP: delete the pod May 11 01:21:00.898: INFO: Waiting for pod downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6 to disappear May 11 01:21:00.906: INFO: Pod downwardapi-volume-36fee349-9e5a-488e-a7a9-c3f06cb53bf6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:00.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6578" for this suite. 
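------------------------------
What the Downward API spec above verifies: when a downward-API volume projects limits.memory and the container declares no memory limit, the projected value falls back to the node's allocatable memory. A minimal sketch of such a pod, again with client-go types; the volume path, file name, and image are illustrative, while the container name client-container comes from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No resources.limits.memory on purpose: the projected value
				// then reports node allocatable memory instead.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------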
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4647,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:21:00.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3755 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 01:21:00.991: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 01:21:01.054: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 01:21:03.232: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 01:21:05.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:07.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:09.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:11.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:13.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:15.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:17.059: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 01:21:19.058: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 01:21:19.064: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 01:21:21.068: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 01:21:23.069: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 01:21:25.068: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 01:21:29.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.215:8080/dial?request=hostname&protocol=udp&host=10.244.1.214&port=8081&tries=1'] Namespace:pod-network-test-3755 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:21:29.090: INFO: >>> kubeConfig: /root/.kube/config I0511 01:21:29.123765 7 log.go:172] (0xc005ebe000) (0xc002a34960) Create stream I0511 01:21:29.123806 7 log.go:172] (0xc005ebe000) (0xc002a34960) Stream added, broadcasting: 1 I0511 01:21:29.126023 7 log.go:172] (0xc005ebe000) Reply frame received for 1 I0511 01:21:29.126076 7 log.go:172] (0xc005ebe000) (0xc000faa0a0) Create stream I0511 01:21:29.126091 7 log.go:172] (0xc005ebe000) (0xc000faa0a0) Stream added, broadcasting: 3 I0511 01:21:29.127010 7 log.go:172] (0xc005ebe000) Reply frame received for 3 I0511 01:21:29.127048 7 log.go:172] (0xc005ebe000) (0xc00184c6e0) Create stream I0511 
01:21:29.127060 7 log.go:172] (0xc005ebe000) (0xc00184c6e0) Stream added, broadcasting: 5 I0511 01:21:29.127792 7 log.go:172] (0xc005ebe000) Reply frame received for 5 I0511 01:21:29.226292 7 log.go:172] (0xc005ebe000) Data frame received for 3 I0511 01:21:29.226330 7 log.go:172] (0xc000faa0a0) (3) Data frame handling I0511 01:21:29.226358 7 log.go:172] (0xc000faa0a0) (3) Data frame sent I0511 01:21:29.227008 7 log.go:172] (0xc005ebe000) Data frame received for 5 I0511 01:21:29.227040 7 log.go:172] (0xc00184c6e0) (5) Data frame handling I0511 01:21:29.227060 7 log.go:172] (0xc005ebe000) Data frame received for 3 I0511 01:21:29.227068 7 log.go:172] (0xc000faa0a0) (3) Data frame handling I0511 01:21:29.228573 7 log.go:172] (0xc005ebe000) Data frame received for 1 I0511 01:21:29.228589 7 log.go:172] (0xc002a34960) (1) Data frame handling I0511 01:21:29.228601 7 log.go:172] (0xc002a34960) (1) Data frame sent I0511 01:21:29.228614 7 log.go:172] (0xc005ebe000) (0xc002a34960) Stream removed, broadcasting: 1 I0511 01:21:29.228624 7 log.go:172] (0xc005ebe000) Go away received I0511 01:21:29.228770 7 log.go:172] (0xc005ebe000) (0xc002a34960) Stream removed, broadcasting: 1 I0511 01:21:29.228800 7 log.go:172] (0xc005ebe000) (0xc000faa0a0) Stream removed, broadcasting: 3 I0511 01:21:29.228823 7 log.go:172] (0xc005ebe000) (0xc00184c6e0) Stream removed, broadcasting: 5 May 11 01:21:29.228: INFO: Waiting for responses: map[] May 11 01:21:29.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.215:8080/dial?request=hostname&protocol=udp&host=10.244.2.37&port=8081&tries=1'] Namespace:pod-network-test-3755 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 01:21:29.232: INFO: >>> kubeConfig: /root/.kube/config I0511 01:21:29.266003 7 log.go:172] (0xc002c84a50) (0xc00184d360) Create stream I0511 01:21:29.266053 7 log.go:172] (0xc002c84a50) (0xc00184d360) Stream added, broadcasting: 1 I0511 01:21:29.271545 7 log.go:172] (0xc002c84a50) Reply frame received for 1 I0511 01:21:29.271596 7 log.go:172] (0xc002c84a50) (0xc000faa320) Create stream I0511 01:21:29.271613 7 log.go:172] (0xc002c84a50) (0xc000faa320) Stream added, broadcasting: 3 I0511 01:21:29.272681 7 log.go:172] (0xc002c84a50) Reply frame received for 3 I0511 01:21:29.272709 7 log.go:172] (0xc002c84a50) (0xc002a34aa0) Create stream I0511 01:21:29.272721 7 log.go:172] (0xc002c84a50) (0xc002a34aa0) Stream added, broadcasting: 5 I0511 01:21:29.273841 7 log.go:172] (0xc002c84a50) Reply frame received for 5 I0511 01:21:29.349591 7 log.go:172] (0xc002c84a50) Data frame received for 3 I0511 01:21:29.349634 7 log.go:172] (0xc000faa320) (3) Data frame handling I0511 01:21:29.349658 7 log.go:172] (0xc000faa320) (3) Data frame sent I0511 01:21:29.350099 7 log.go:172] (0xc002c84a50) Data frame received for 5 I0511 01:21:29.350121 7 log.go:172] (0xc002a34aa0) (5) Data frame handling I0511 01:21:29.350151 7 log.go:172] (0xc002c84a50) Data frame received for 3 I0511 01:21:29.350166 7 log.go:172] (0xc000faa320) (3) Data frame handling I0511 01:21:29.352167 7 log.go:172] (0xc002c84a50) Data frame received for 1 I0511 01:21:29.352187 7 log.go:172] (0xc00184d360) (1) Data frame handling I0511 01:21:29.352199 7 log.go:172] (0xc00184d360) (1) Data frame sent I0511 01:21:29.352210 7 log.go:172] (0xc002c84a50) (0xc00184d360) Stream removed, broadcasting: 1 I0511 01:21:29.352305 7 log.go:172] (0xc002c84a50) Go away received I0511 01:21:29.352351 7 log.go:172] (0xc002c84a50) 
(0xc00184d360) Stream removed, broadcasting: 1 I0511 01:21:29.352365 7 log.go:172] (0xc002c84a50) (0xc000faa320) Stream removed, broadcasting: 3 I0511 01:21:29.352371 7 log.go:172] (0xc002c84a50) (0xc002a34aa0) Stream removed, broadcasting: 5 May 11 01:21:29.352: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:29.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3755" for this suite. • [SLOW TEST:28.445 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:21:29.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 11 01:21:29.438: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4748" to be "Succeeded or Failed" May 11 01:21:29.456: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.760301ms May 11 01:21:31.461: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022372774s May 11 01:21:33.464: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025994723s May 11 01:21:35.484: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.045857337s STEP: Saw pod success May 11 01:21:35.484: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 11 01:21:35.487: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 01:21:35.532: INFO: Waiting for pod pod-host-path-test to disappear May 11 01:21:35.578: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:35.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4748" for this suite. • [SLOW TEST:6.388 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":285,"skipped":4730,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:21:35.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-74b986f1-ffad-41fc-952f-2742ea8eea65 STEP: Creating a pod to test consume configMaps May 11 01:21:36.350: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a" in namespace "projected-8498" to be "Succeeded or Failed" May 11 01:21:36.628: INFO: Pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a": Phase="Pending", Reason="", readiness=false. Elapsed: 278.405106ms May 11 01:21:38.632: INFO: Pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282533782s May 11 01:21:40.676: INFO: Pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326117472s May 11 01:21:42.680: INFO: Pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.330579075s STEP: Saw pod success May 11 01:21:42.680: INFO: Pod "pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a" satisfied condition "Succeeded or Failed" May 11 01:21:42.684: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a container projected-configmap-volume-test: STEP: delete the pod May 11 01:21:44.867: INFO: Waiting for pod pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a to disappear May 11 01:21:44.944: INFO: Pod pod-projected-configmaps-65306b18-a683-4b6d-9631-7fe5167a538a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:44.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8498" for this suite. • [SLOW TEST:9.264 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":286,"skipped":4732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:21:45.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 01:21:45.199: INFO: Waiting up to 5m0s for pod "pod-f8142865-48d6-4326-80d4-6abb49a1feda" in namespace "emptydir-2592" to be "Succeeded or Failed" May 11 01:21:45.400: INFO: Pod "pod-f8142865-48d6-4326-80d4-6abb49a1feda": Phase="Pending", Reason="", readiness=false. Elapsed: 200.967624ms May 11 01:21:47.484: INFO: Pod "pod-f8142865-48d6-4326-80d4-6abb49a1feda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284486824s May 11 01:21:49.488: INFO: Pod "pod-f8142865-48d6-4326-80d4-6abb49a1feda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.288865707s STEP: Saw pod success May 11 01:21:49.488: INFO: Pod "pod-f8142865-48d6-4326-80d4-6abb49a1feda" satisfied condition "Succeeded or Failed" May 11 01:21:49.491: INFO: Trying to get logs from node latest-worker2 pod pod-f8142865-48d6-4326-80d4-6abb49a1feda container test-container: STEP: delete the pod May 11 01:21:49.611: INFO: Waiting for pod pod-f8142865-48d6-4326-80d4-6abb49a1feda to disappear May 11 01:21:49.626: INFO: Pod pod-f8142865-48d6-4326-80d4-6abb49a1feda no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:49.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2592" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4771,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 01:21:49.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e6a35b6a-a9c4-4f2c-97a5-5ee2cd11cb34 STEP: Creating a pod to test consume configMaps May 11 01:21:49.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870" in namespace "projected-4537" to be "Succeeded or Failed" May 11 01:21:49.774: INFO: Pod "pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870": Phase="Pending", Reason="", readiness=false. Elapsed: 25.269345ms May 11 01:21:51.843: INFO: Pod "pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094769957s May 11 01:21:53.848: INFO: Pod "pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099278897s STEP: Saw pod success May 11 01:21:53.848: INFO: Pod "pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870" satisfied condition "Succeeded or Failed" May 11 01:21:53.851: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870 container projected-configmap-volume-test: STEP: delete the pod May 11 01:21:53.914: INFO: Waiting for pod pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870 to disappear May 11 01:21:53.916: INFO: Pod pod-projected-configmaps-4342e518-9cc3-466f-b9cb-dbfbd658c870 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 01:21:53.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4537" for this suite. 
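------------------------------
The two projected-configMap specs above differ only in whether the ConfigMap key is remapped to a nested path; both run the consuming container as a non-root user. A sketch combining the two shapes follows; the UID, key, paths, and image are assumptions, while the container name projected-configmap-volume-test appears in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/projected/path/to/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
								// Remap key "data-1" to a nested file path, as the
								// "with mappings" variant does.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data"}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------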
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 11 01:21:53.922: INFO: Running AfterSuite actions on all nodes May 11 01:21:53.922: INFO: Running AfterSuite actions on node 1 May 11 01:21:53.922: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 5537.184 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS